Do you think VS and Intellisense make us dumber? [closed] - visual-studio

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I read this article, specifically the "Intellisense" and "Generated Code" sections:
http://www.charlespetzold.com/etc/DoesVisualStudioRotTheMind.html
Do you think the author is right?
I don't agree that Intellisense is soooo bad for programmers. VS for C# tends to "hide" the controls' event wiring in another file, but you can find it if you know enough about the language, and you can modify it by hand. And with VS I don't need to memorize all the .Net classes I use.
I think it doesn't matter whether you use an IDE or Notepad, but if these RAD tools exist and are free... why not use them?

No, I very much disagree with this point.
Yes, I do agree that intellisense allows me to keep less of an object's growing number of members in my head. I am dumber in the sense that I often know less about the intricate details of projects where I use intellisense heavily.
For instance, I can probably rattle off all of the members of the C++ types I use with great accuracy. I tend to be a Vim-only guy for my C++ projects and hence don't really use intellisense. In C# and VB.Net projects, though, I couldn't rattle off the members with the same accuracy, as I rely on intellisense more often.
But there is a trade-off. Keeping all of the members in my head comes with a cost. When writing code, instead of focusing on the algorithm, I focus on the members. I have to constantly think about the naming convention of a particular type, or the parameter list, or what's by ref or by val, when writing out an algorithm in C++. In C#/VB.Net I'm more free to think about the algorithm, as the IDE takes care of finding the members for me.
Does this mean I'm dumber? No it simply means I'm able to focus on the problem I'm actually trying to solve. I feel this makes me more productive and hence smarter not dumber.

It doesn't make smart people dumber, but it makes dumb people look smarter

No, modern programming tools and languages help the programmer focus less on the little things and more on the big picture.
The main goal is to design solid software. If a programmer doesn't have to worry about memorizing every method of a class, they can spend more time on engineering the product.

Our physics prof always said: why memorize something you can look up? He always listed the required formulas on the board during exams. Seems to me intellisense is the same idea. Rather than remembering whether the object uses a Count or a Length property, let VS tell me.

No, it enables us to code faster I think. Anything to make the coding process faster, easier and simpler is a step in the right direction in my opinion.

Not dumber, it makes us faster :)

I use intellisense and generated code to speed up development, not because I don't know what I'm doing. Therefore, I can't agree that using them makes you dumber.
I am the kind of person that will try to learn as much about a language as possible before attempting to use the tools that facilitate development in that language. In that regard, I have to agree with Matthew Jones' comment that "tools do not make people dumber...laziness and lack of drive do."

Programming is just moving forward to make life easier for the programmer and to make him more productive.
It would be like complaining that we don't write assembly code anymore... it's important to know the big concepts and ideas behind it, but working with it would be weird (in most cases).

I don't think so.
Intellisense makes things like case sensitive spelling easier.
Is it MyArray.Count() or MyArray.Size() or Length(MyArray)...? What's the return type of that particular method, again? Intellisense saves me a few minutes every day on Google for things like this.
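A rough illustration of the kind of trivia I mean (standard .NET members; the snippet itself is just a sketch):

using System;
using System.Collections.Generic;
using System.Linq;

class CountVersusLength
{
    static void Main()
    {
        int[] array = { 1, 2, 3 };
        var list = new List<int> { 1, 2, 3 };
        IEnumerable<int> query = list.Where(x => x > 1);

        Console.WriteLine(array.Length);   // arrays expose a Length property
        Console.WriteLine(list.Count);     // List<T> exposes a Count property
        Console.WriteLine(query.Count());  // IEnumerable<T> needs the Count() extension method
    }
}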

Detail memorization is not the most important skill in software development. It is better to have problem-solving skills and the ability to find the information you need. If you invest more time in the details, you will be lost when the next greatest language is born, but algorithms and patterns will still be relevant.

The question is, of course: does Intellisense make programming less of a skilled profession?

Yes, I agree with the author. Intellisense (and many other Visual Studio features) is indeed "making us dumber" for the reasons mentioned in the article.
That's not always a bad thing. Sometimes it's more desirable to be productive than it is to get smarter. The challenge is striking the right balance. :)

The only qualm with IntelliSense that the author seems to have is the autocomplete when you press the space bar, which apparently he doesn't realize you can turn off in the Options menu.
Although, he claims that coding "has become a constant dialog with IntelliSense"... which makes no sense because you still have to pick the correct methods from the list! Without it, you'd simply have to search online for the name of the method instead of an instantaneous search.
It's interesting how the author ignores that IntelliSense can't tell you whether to use a StringBuilder or a String, etc.
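A minimal sketch of what I mean (example code only): IntelliSense completes both versions below equally happily, but only the programmer knows that the second one avoids re-allocating the string on every append.

using System;
using System.Text;

class ConcatSketch
{
    static void Main()
    {
        // Each += creates a brand new string, because .NET strings are immutable.
        string slow = "";
        for (int i = 0; i < 1000; i++)
            slow += i + ",";

        // StringBuilder appends into an internal buffer instead.
        var sb = new StringBuilder();
        for (int i = 0; i < 1000; i++)
            sb.Append(i).Append(',');
        string fast = sb.ToString();

        Console.WriteLine(slow == fast); // True: same result, different cost
    }
}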

Not at all. When the intellisense list pops up, does a programmer search through the whole list every time to find the function they were looking for? Maybe at first, but normally you keep typing until intellisense narrows down the list to the point where it's faster to use the up/down arrows and tab to complete.
Without intellisense, it would take a little longer to code if you are experienced with the classes that you're using, and a lot longer if you aren't. It only serves as a speed tool and quick documentation of everything that's available.

It doesn't make us dumber; it is a necessity.
Back in the day (MS BASIC for me), there was no need for intellisense. The scope of the language was limited enough for a programmer to remember all keywords and functions.
Jump to today, and intellisense is an absolute requirement. Take .Net for example. There is simply no way to remember or discover the many thousands of types, properties and methods. Oh sure, for a very small project you may know a bunch (100s?) of items. But let's be honest - there is no way a modern working programmer could exist without it.

Adding my two cents here.
From my own experience, and as mentioned in TFA, I would say that the only drawback I've encountered so far is that while you learn the language you might pick up bad habits - using ArrayList instead of List<T> only because you're not aware that changing your using clauses would give you the other datatype.
The author complains that he gets the wrong datatypes when entering certain datatypes. While some of you will probably get a license, a weapon and start the manhunt, I've found that using naming conventions is an excellent way of forcing intellisense to work my way, especially when working in GUI-control-intensive forms and such.
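As a quick sketch of that ArrayList vs List<T> trap (example code only, not from the article):

using System;
using System.Collections;          // the old non-generic ArrayList lives here
using System.Collections.Generic;  // List<T> lives here

class ListTrap
{
    static void Main()
    {
        // ArrayList stores object: value types get boxed and you cast on the way out.
        ArrayList oldStyle = new ArrayList();
        oldStyle.Add(42);
        int a = (int)oldStyle[0];

        // List<int> is strongly typed, and IntelliSense shows int members on the elements.
        List<int> newStyle = new List<int>();
        newStyle.Add(42);
        int b = newStyle[0];

        Console.WriteLine(a + b); // 84
    }
}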

No more so than calculators made for poorer mathematicians and physicists. Sure, using a slide rule forces you to keep a mental model of the order of magnitude of things, but it is really just a tool ... and better tools let you do better work.

This can be abstracted into the traditional question:
Does knowing more about the details help or hurt?
As a rule, experienced engineers and craftsmen say: help. But knowing the details also lets you know when the details don't matter, which is what Visual Studio/Intellisense provides. (I'm sure there's a pithy proverb that could be said here, but I don't feel up to thinking up a quip.)

Dumb & Lazy.

Interesting question. Sure I find Intellisense in some sense makes the job easier, but it's kind of like money. The more you have, the more you spend, not necessarily on things you need. I learned to program around '62, and somehow I got along without Intellisense for a really long time. What Intellisense does for me now is help me remember lots of classes and members that as little as 4 years ago I never knew I needed.
There's one tendency I've seen in software that never fails. Nature abhors a vacuum. Machines get bigger, so guess what, software gets bigger (but not always better). Machines get faster, so software gets slower. Now people can get help typing long names, so the code gets really verbose. Now people get help remembering lots of classes, so guess what, there are lots more classes to remember. This goes a long way to helping the software get bigger and slower.
I do a lot of performance tuning, and what is the dominant cause of slowdown? It is galloping generality caused by overdesign with too much data structure, too many classes, and too many layers of abstraction. In a word, "bloat". Here is just a small example.

I find Visual Studio's tools conducive towards more experimentation. When you're dealing with the Win32 API in C (for example) you can't really poke around too easily. When you're working with C#, it's a snap to have a little explore around a library and learn what it does without breaking out MSDN or a disassembler for the entire evening.
If you're a naturally curious programmer, Intellisense won't change that. If you're not, Intellisense won't change that either. To paraphrase one of my colleagues "I think it's a waste of time looking through huge books when you can just take an implementation from the web and move on to the next thing".
It's an old argument anyway, pre-Intellisense. Does BASIC rot the mind where writing in x86 doesn't? Is knowing an algorithm inside out relevant when every single programming language you're going to use in your role has a tried and tested library?
I find that those who consider programming a hobby or a skill are inclined to comprehend and investigate. Those who consider it the day job don't. Regardless of any frippery around it, it's more about a programmer's mindset than what is made available.

Related

To be a lazy developer or not to be a lazy developer? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 10 years ago.
Am I a lazy developer?
Is it being lazy to use automated tools, such as code generators and such?
Now, I could, if I had to, create all the data layers and entities I needed, but I choose to use CodeSmith to generate my datalayers and entities.
I also use Resharper and I would say it fights with MSDeploy as to which gets installed first after Visual Studio.
Again if I had to, I could code without it, but prefer not to.
Both these tools from my point of view are no brainers as they improve output massively.
But is this lazy?
I'm sure there are purists out there that would say everything should be written by you so you know what everything is doing, but if you can read through the code and see what is happening, is that OK?
So am I being lazy or am I just using all the cards in my hand?
In programmers, laziness is a virtue, so don't worry.
It's only lazy if you use a tool to produce code and use it as-is without verifying that the code meets your needs and abides by your standards.
You don't need to reinvent the wheel n times; this is done often enough. Briefly, I'd state that using tools like the ones you mentioned (within reason) is absolutely no problem...
For you? No, you're not being lazy.
For the guy that doesn't understand what code generators are doing and how they do it? Yes, it's being lazy.
That's the important distinction: You have to know what you gain and know what you're missing by using a code generator. If you don't, it's only a matter of time before you come across a case where you have to be able to produce those classes and not know how.
Both these tools from my point of view are no brainers as they improve output massively.
This means you're not being lazy, you are using the appropriate tools to enable you to concentrate on the important aspects of the job.
It's not being lazy - it's being smart. There's nothing wrong with using every tool at your disposal...as long as it makes you more productive. Using tools for the sake of using tools is a bad idea.
However, if you don't know what your tool is doing under the hood, you should learn about it so if you don't have the tool available for some reason, you can get the job done.
I think that's the wrong question. Laziness is a virtue. I've seen too many programmers who do things the hard way rather than sitting back and thinking for a few minutes to come up with an easier way. I've had so many times that I've said to a junior programmer something to the effect of, "Yes, I respect your diligence in working through lunch and staying late to write the code to do X, but if you'd taken a few minutes to check the documentation you might have seen that there is already a function in the library that does that". Or similar stories.
I'm not familiar with the specific tools you describe, but to me the question always is: does this tool actually save me any work? I've tried plenty of "code generators" that basically just create code stubs. So gee, thanks, you wrote the "function x(int, float)"; now all I have to do is fill in the actual parameter names and write the code. What did that save me? I've also seen plenty of code generators that write really awful code, so now I have to try to add the "custom" code to this jumbled mess. Wouldn't it have been easier to just write the whole thing cleanly the first time? I've seen plenty of productivity tools where I found it takes me more time to set up the parameters to run the tool than I actually saved by using it. (Like the old joke that it's been proven that jogging regularly really does make you live longer: for every 60 minutes you spend jogging, it adds 30 minutes to your life.) Some tools may produce code or data structures or whatever that are difficult to maintain, so you save an hour today but it costs you ten hours in maintenance over the life of the project. Etc.
My conclusion isn't that you shouldn't use productivity tools, but rather that you should make sure they really are increasing your productivity, and not just giving an illusion of doing so. If in your case you find these tools really do help you, then using them is not "cheating", it's simply smart.
As everyone else already pointed out there's nothing wrong in your use of code generators.
Still, I can see downsides and reasons to avoid it in certain situations.
Choice of language. Sometimes the very fact that you need a code generator to get your coding started could imply you're using the wrong language for the task. Most times the language cannot really be chosen, so code generators remain the best way to go.
Code redundancy. Depending on the actual generators used, generated code could be redundant. If this happens, and generation happens once, isn't automated, and the generated code goes into the main repository, maintenance problems could arise in the long run. Not really a problem with code generation itself, but with the way it should and should not be used.
Adding development platform requirements. We have to concede many programmers out there work on bread-toasters doubling as PCs. It's really a bad (and sad) reality of cheap business practices meeting sharp minds (sharp minds go to waste in the process). It could become a concern if our project (which could have a port in store for the future, possibly at an external facility) needs a hefty, RAM-hogging, insufficiently cross-platform IDE handy to compile every little modification.
So, no definitive answer on code-generating laziness and programming: it depends. Then again, using the wrong tools for the job is bad for your health (and business), so... don't.
You're using all the cards in your hand. Why reinvent the wheel when there are tools available to make your job easier. Bear in mind these tools DON'T do your job, they only assist.
What you create is down to you, so using the tools is not lazy... it's just intelligent.
I'd say you're more efficient rather than lazy.
Programming is primarily a thinking exercise not a typing one. So long as you understand what the tools are doing you're shifting the balance away from typing to thinking. Doing more of what your job is about? Doesn't sound like lazy to me!
I'm sure there are purists out there that would say everything should be written by you so you know what everything is doing
This might have been a viable point of view during the early days of programming. But nowadays, this is simply not feasible (or even preferable). After all, you've already obscured a certain level of understanding just by using a high-level language.
That said, I've found it to be a great learning exercise to write some of these things by hand occasionally. Not only do you get to learn more, but they teach you how helpful these tools really are (or aren't). Note that I'd only do this on a personal project though. I wouldn't do this for any project someone was paying me for (unless I were working for a masochist or something).
Ask yourself why there are so many ORM and other code-generation tools around. I'd say go for it with the proviso that you leave it maintainable for the next guy/gal.
Programming is about being lazy, about automating repetitive tasks. If you can't do that inside your language, using code generators and similar things is a useful workaround.
It depends on what you're writing, of course. I am surprised nobody's brought this up. If you're writing device drivers, operating systems, protocols, or server software (web servers, TCP-driven servers, etc.), you should probably do it by hand.
But what I do, and probably what a lot of us do, is implement business processes in code for web pages or web services. And in those areas, if you can improve on your code with code generators, go for it.
Yes, you are being a lazy developer; be honest with yourself - if you took the time to do it the hard way, you could call yourself less lazy than you are now.
The point is, being lazy isn't inefficient at all.
Lazy people take time to look at a problem from different directions before acting on it; this avoids unnecessary errors, which saves you valuable time.
So you're being lazy, but that's OK. People don't hire hyperactive coders that make 10 applications a day but leave a trail of bugs in their path. Bug-fixing costs time, and time is money.
Conclusion:
Laziness = profit
Go for it.
I think the best developers are also the laziest. Basically, everything you do should be focused on getting the end result with the least amount of work. This often produces the best result and also keeps developers from being distracted by fun things to include in a project. A lazy developer would, for example, never add an easter egg to his code, simply because this would be more code, which could introduce more bugs that need to be fixed later on. Adding code is bad, since you'd also add more bugs that you need to resolve later. Still, you will need to add code, else you won't get paid. So, as a lazy developer, you would of course choose the most optimized, best-tested code that would almost never fail, and you'd work in a way that reduces the chance of errors to a minimum.
Do keep in mind that lazy developers should focus on avoiding work in the future, not on avoiding work right now! So stop reading here and get back to work! ;-)
Laziness is a trait that most good programmers have. Unless they work for Adobe, in which case they are often lazy in a bad way.

The "Should be easy for a junior developer to understand" argument [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Does anyone actually think this is a good reason to "Dumb down" your code?
When a manager asks you to make your code simple (in terms of technology skills required to understand it) at the cost of more verbose cluttered code what should you do?
I strongly disagree. Junior developers will end up being senior developers. How? By encountering advanced topics that aren't taught in school.
My code base now makes heavy use of Inversion of Control containers. I would never revert my code to the old way because a junior developer had issues grokking IoC. Instead I would take them out for a beer after work and discuss it. The more the junior dev learns, the less hand-holding needs to be done.
Here's a blog post discussing this very topic.
If you're constantly dumbing down your code or designs, it's a pretty good way to make sure your junior developers stay dumb. Challenge them and use it as a mentoring opportunity. Of course, some will never learn, but you've got bigger problems at that point.
It's not just pointy-haired bosses either. As a senior dev, it's often difficult to resist the urge to mommy junior developers. "Oh I'll just do this part because it's way too hard for them", or it'll take them too long, or they'll get way off in the weeds.
And finally, make sure you strike a balance between idiomatic code that uses the full power of a language vs idiomatic code that abuses that power. There's no reason you need to override the || operator just to run its args in two separate threads. At least dumb the code down a little for your older, dumber, future self.
Well, I think it's reasonable to avoid using "clever" language constructs unless they really, really make the code better - at which point, if a junior developer sees it, hopefully they'd ask rather than just being flummoxed.
Here's an alternative way of phrasing it though: "Write your code so that it's easy enough to understand that if you get called at 3am and asked to fix a bug in it, you can still understand it."
Seriously, make it as easy to understand as possible. That doesn't mean a comment every other line - it means a comment where the purpose of a piece of code isn't obvious, and only then where the preferred choice of "well make it obvious then" doesn't work.
There's a difference between puzzle code and complex code.
I've found that the single biggest issue is that there is a big difference between "easy to understand by reading" versus "well-factored", and that the two goals are often in direct tension with one another. In well-factored code, there is a lot more jumping around between classes and a lot of virtual dispatch, so the path through the code is very non-linear.
Yes readability and being able to easily understand code is a big part of maintainability in my opinion.
Well if you intend to maintain your code forever, never change jobs, never feel the urge to work on something new, and can assure everyone you will never be hit by a truck, then sure there is no need to dumb down that puzzle code.
No. In the past, I've learned a lot from seeing the tricks of more experienced developers. I'd much rather have had the opportunity to learn something new from them than have had them dumb things down for me.
It's a balancing act...
If any 3 people on your team can 'read' your code and know what it's doing... no need to change. However, if you're the only person who can understand your code (no matter how rad/clever you think it is)... maybe you should take it down a few notches.
Another guideline that helps is to 'try the simplest thing that works.' All the latest buzzwords are nice to know, but what is even more important is having the skill to spot where you could get by without using them. You don't need to spray-paint your code with IoC or frameworks or design patterns...
The manager's side of this argument is sorely missed in this thread :) (and for the record... I'm not one). His/her major concern is that he doesn't want a dark area of code that no one else dares to venture into... so if you can convince your boss that a few other people on the team can make an arbitrary fix (or better yet, show an actual bug fixed by someone else), the manager should let you off the hook. Disagreeing with your boss is another art :)... but you can usually talk things out.
You don't have to go all the way back to the lowest common denominator... strike a balance.
Your goal should not be for your code to be easy to understand for a junior developer. Instead, it should be easy to understand for a maintenance programmer.
This means:
Local "complexity" is okay, when needed. If they see the complex code they'll know they need to dig deeper.
Hidden complexity is bad. If you can't see that changing a piece of code will have subtle side effects, then maintaining the code will be a nightmare (see the sketch below).
New technologies that are visible are also okay, when not taken to extremes.
This is because those that maintain code rarely have the same overall understanding of the system. Or the time to develop it.
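A small sketch of that local vs hidden distinction (the class and names are made up):

using System;

class Order
{
    private decimal subtotal = 100m;
    private int timesRead;

    // Hidden complexity: nothing at the call site hints that *reading* this
    // property mutates state, so a maintainer can break things by accident.
    public decimal Total
    {
        get
        {
            timesRead++;            // surprise side effect
            return subtotal * 1.2m; // surprise markup baked into a getter
        }
    }

    // Local complexity: the logic is involved, but it is all visible right here,
    // so a maintainer who sees it knows they need to dig deeper.
    public decimal TotalWithTax(decimal taxRate)
    {
        if (taxRate < 0m)
            throw new ArgumentOutOfRangeException(nameof(taxRate));
        return subtotal + subtotal * taxRate;
    }
}

class HiddenComplexitySketch
{
    static void Main()
    {
        var order = new Order();
        Console.WriteLine(order.Total);              // 120.0
        Console.WriteLine(order.TotalWithTax(0.2m)); // 120.0
    }
}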
I disagree with the manager: What needs to be simple is the code, not the technology used to write it.
I would, however, impose a closely related requirement:
The internal documentation states clearly what technologies are needed to understand this code, and it gives references to places where those technologies can be learned.
For example, even as a senior developer, I find all matrix code baffling. But if somebody gives me a reference to the right part of Numerical Recipes, I can puzzle out the details.
Yes. It's a very valid reason to take it down a notch. The reality is that a very, very large number of developers (as in most) are at the junior level.
As far as what you should do... Say "Yes Sir" or "Yes Ma'am" and do it. There is only one boss in that relationship.
UPDATE:
As some people seem to think that having a junior dev learn advanced topics while wading through obfuscated code is a good thing, I want to throw one more thing in here.
When ANY developer (jr or otherwise) runs into code they don’t understand, their first approach is to refactor it into something that is understandable. This is called the “Wow that code is crap I must rewrite it!” syndrome. I’m willing to bet everyone on this board has experienced it. So, as a business owner, do I want to pay for code to be developed each time a new person comes by or do I want to pay for new features to be added?
Guess which person I’m going to keep around longer.
If you dumb down your code, you're going to be stuck working with dummy junior programmers who will never be familiar with advanced coding techniques. If there's any verbose code that's trying to express an inherently complex procedure that you wrote, the aforementioned junior developer probably wouldn't be able to see the forest for the trees anyways. And they'd probably screw up if they had to express a complex concept if all they knew were basic primitive constructs whereas if they knew how to express what they meant tersely and elegantly, the code has a better chance of being correct.
Scott Muc said:
"I've found that the single biggest issue is that there is a big difference between "easy to understand by reading" versus "well-factored", and that the two goals are often in direct tension with one another. In well-factored code, there is a lot more jumping around between classes and a lot of virtual dispatch, so the path through the code is very non-linear."
Quoted for truth, and I think this is one of the biggest problems with C++ code in general. If you're the one that wrote the code, it's pretty easy to come up with a very complicated set of stuff that is well factored, makes lots of sense if you already know it, works well, and generally resembles a diamond crystal, etc. but which, from the perspective of someone who's trying to figure out how you got there and why things are the way they are and how things work, and how one might make changes that fit into the existing system and satisfy new requirements, is almost completely opaque and impenetrable.
How does this kind of situation help maintainability? That situation is one of my main beefs with C++ programmers. Far better to have a mess of plain C code which can be hacked upon than a diamond crystal of impenetrably super-factored code which nearly nobody can figure out how to sensibly modify without smashing the crystalline structure.
One way to "dumb down" code that I actually think is an excellent practice is to use longer variable names and longer function names. Naming variables and functions to make their purpose easily understandable is a significant engineering task, IMHO, and takes extra effort on the part of the original author of the code. Damian Conway has some excellent examples in "Perl Best Practices". Some examples include: Prefer "final_total" to "sum"; prefer "previous_appointment" to "previous_elem", prefer "next_client" to "next_elem". Prefer "sales_records" to "data". Etc. He also pushes for using grammatical templates (Noun-adjective) and staying consistent. Don't use max_displacement one place and then use VelocityMax in another. Index variables need real names too:
sales_record[i] vs sales_record[cancelled_transaction_number]
I frequently "refactor" my code at the end of a development cycle by finding new names for all my functions and variables. In a modern editor it's trivial to change them all, and it's only at the end that I really figure out what I used them for. "Dumbing down" code this way isn't classic C, but it's easier for me when I come back months later asking WTF did I do?
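A small before/after sketch of that end-of-cycle renaming pass, in C# this time (the names are mine, not Conway's):

using System;

static class SalesReport
{
    // Before: terse names force the reader to reconstruct the intent.
    static decimal Calc(decimal[] d, int n)
    {
        decimal s = 0m;
        for (int i = 0; i < n; i++)
            s += d[i];
        return s;
    }

    // After: the same code, renamed once it is clear what it is actually for.
    static decimal TotalCancelledSales(decimal[] salesRecords, int cancelledTransactionCount)
    {
        decimal finalTotal = 0m;
        for (int cancelledTransaction = 0; cancelledTransaction < cancelledTransactionCount; cancelledTransaction++)
            finalTotal += salesRecords[cancelledTransaction];
        return finalTotal;
    }

    static void Main()
    {
        decimal[] refunds = { 10m, 20m, 30m };
        Console.WriteLine(Calc(refunds, refunds.Length));                // 60
        Console.WriteLine(TotalCancelledSales(refunds, refunds.Length)); // 60
    }
}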
It depends on the code. Is this something being shipped in your flagship product that requires use of the features your manager wants you to remove for performance reasons? If the answer is yes, I would try to have your manager let you keep the code and just write up a document explaining in detail the section of code that is hard to understand. If it's an internal app that needs to be maintained by lots of different people, and the complex features can be removed without negatively affecting the program, remove them and pick more important battles to fight.
You should just remind your boss that you can build rocket ships or chicken coops, and he will have to pay you the same for either one. Do what they say but generally an environment like that lends itself to people looking for a new environment.
The old quote is appropriate here:
Make everything as simple as possible,
but not simpler.
I've known developers who wrote highly obfuscated code that they felt was advanced but which the rest of the team felt was unreadable and unmaintainable. Part of this involved overuse of advanced language features (in C, the ternary operator and the comma operator) and writing in an obscure personal idiom (for example, replacing ptr->item with (*ptr).item everywhere) that no-one else would ever be able to maintain. The author was trying to outsmart the optimizer (which to be fair, was far from good).
Note: I'm not suggesting that "x = (p == NULL) ? "default" : p->value;" is complicated, but when someone uses ternary operators that span many lines, are nested, and make heavy use of the comma operator, code quickly becomes unreadable.
In this sort of case, "dumbing down" the code would have been a good idea. The problem was not advanced algorithms nor advanced language features, but overuse and inappropriate use of advanced language features, and an obscure personal idiom.
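A C# rendering of the kind of nesting I mean (example only; C# has no comma operator, so this just shows the conditional-operator side):

using System;

class Priority
{
    public int Value;
    public bool IsUrgent;
}

class TernarySketch
{
    static string Describe(Priority p, int threshold)
    {
        // The "clever" version: nested conditional operators spanning several lines.
        string nested = p == null ? "default"
                      : p.Value > threshold ? (p.IsUrgent ? "urgent-high" : "high")
                      : p.Value > 0 ? "normal"
                      : "low";

        // The "dumbed down" version: the same decisions as a plain if/else chain.
        string plain;
        if (p == null) plain = "default";
        else if (p.Value > threshold) plain = p.IsUrgent ? "urgent-high" : "high";
        else if (p.Value > 0) plain = "normal";
        else plain = "low";

        return nested == plain ? plain : "unreachable"; // the two always agree
    }

    static void Main()
    {
        Console.WriteLine(Describe(new Priority { Value = 5, IsUrgent = false }, 3)); // high
        Console.WriteLine(Describe(null, 3));                                         // default
    }
}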
However, in the case you are asking about, where the manager's changes make the code more difficult to read and maintain, I agree with you and the others who have responded. I just wanted to point out the alternative that no-one else has mentioned.
I suggest keeping the code at a "geeky" level and commenting it well so that the juniors can understand the intention behind the code and simultaneously learn a better way (or the right way) to code, so we have the best of both worlds.
I think it is the manager's way of politely telling you that your code is too obfuscated/complex/jumbled/puzzle code... whatever you want to call it. Sometimes we get so involved writing our code that we forget that someone else will have to come along and read it later.
I learned it the hard way and, in retrospect, find that it was the better way. Let the cycle repeat itself.
I agree 100% with the argument. With one major addition: Train your junior developers until they understand your code ;-)
I'm talking about using "unusual" technologies. In this case it's JQuery.
This issue came up when I was writing a wizard control for user registration.
The navigation menu needed to be customised and the current step in the wizard had to have a different css class in the menu. This meant I needed to get access to the currently selected step when generating the menu. My solution was to output the current step index in a hidden html field which could then be read by JQuery in order to customise the css.
I thought that this was much nicer and cleaner than using the databinding syntax in ASP.NET which doesn't have compile-time checking and messes up the layout of the html.
The databinding solution is "standard" while the JQuery one is "unusual", which means that it's less likely to be understood by a junior.
I'm trying more and more these days to provide the required data for the UI rather than hack it into the UI with databinding which is why I added the hidden field with the current step index.
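For what it's worth, a minimal sketch of the server side of that approach, assuming a Web Forms page with a Wizard control named RegistrationWizard and a HiddenField named hidCurrentStep (both names are hypothetical); the jQuery on the client just reads the rendered value and applies the CSS class:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class Register : Page
{
    // Declared here only for the sketch; in a real Web Forms project these
    // would come from the .aspx markup and its designer file.
    protected Wizard RegistrationWizard;
    protected HiddenField hidCurrentStep;

    protected void Page_PreRender(object sender, EventArgs e)
    {
        // Expose the current step index to the client so a small script
        // (jQuery in my case) can mark the matching menu item with the
        // "current step" CSS class.
        hidCurrentStep.Value = RegistrationWizard.ActiveStepIndex.ToString();
    }
}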
It is simply impossible to make progress or to innovate in any industry without doing things that others don't understand. Innovation is necessarily blasphemous. Why? Because if you're doing things that make sense to everyone else around you, the odds are you're not the first one doing it. ;)
That being said, there is a significant difference between doing something that is difficult to understand simply because it's a new or complicated problem versus doing something that's difficult to understand because you're trying to show off or you think confusing people will somehow gain you job security (which I've never seen work, but I've heard of people trying).
Should you make things easy to understand? Yes absolutely, as much as humanly possible. However a program that works and does its job well is the higher priority.
The manager's complaint should never be "don't do this because our junior guys don't understand it" -- it should only ever be "do x instead of y whenever feasible because x is easier to read / understand". This also assumes that x and y are equivalent (accept the same input and produce the same result).
I can't stand when managers do that... I've had three different managers bawl me out for using perfectly normal code the way it was designed to work, not because I was doing anything complicated, but rather only because they felt like it was too much effort for the other guys on our team to go RTFM on the language we were using. As a management strategy, that's totally backwards. It's like being the Holy Roman Catholic church and insisting that the laymen are too dumb to be trusted with literacy.
If you want to know really how ridiculous some of these managers get, try this: I had one manager bawl me out for declaring a variable as a type of "boolean" because he didn't feel the other programmers could handle it. Actually when I asked why, his answer was "because we don't do that here", which is a non-answer, but I interpreted it to mean "dumb it down". They were also berating me for that and similar practices as though it should be obvious that good programming habits were actually "bad" and that I should already know why even though they had never expressed a preferred programming style (either formally or informally). Needless to say, it was a bad job.
Make sure you can understand what it does 6 months down the road.
When in doubt, COMMENT your code. That's what comments are for.

When is a new language the right tool for the job?

For a long time I've been trying different languages to find the feature-set I want and I've not been able to find it. I have languages that fit decently for various projects of mine, but I've come up with an intersection of these languages that will allow me to do 99.9% of my projects in a single language. I want the following:
Built on top of .NET or has a .NET implementation
Has few dependencies on the .NET runtime both at compile-time and runtime (this is important since one of the major use cases is in embedded development where the .NET runtime is completely custom)
Has a compiler that is 100% .NET code with no unmanaged dependencies
Supports arbitrary expression nesting (see below)
Supports custom operator definitions
Supports type inference
Optimizes tail calls
Has explicit immutable/mutable definitions (nicety -- I've come to love this but can live without it)
Supports real macros for strong metaprogramming (absolute must-have)
The primary two languages I've been working with are Boo and Nemerle, but I've also played around with F#.
Main complaints against Nemerle: The compiler has horrid error reporting, the implementation is buggy as hell (compiler and libraries), the macros can only be applied inside a function or as attributes, and it's fairly heavy dependency-wise (although not enough that it's a dealbreaker).
Main complaints against Boo: No arbitrary expression nesting (dealbreaker), macros are difficult to write, no custom operator definition (potential dealbreaker).
Main complaints against F#: Ugly syntax, hard to understand metaprogramming, non-free license (epic dealbreaker).
So the more I think about it, the more I think about developing my own language.
Pros:
Get the exact syntax I want
Get a turnaround time that will be a good deal faster; difficult to quantify, but I wouldn't be surprised to see 1.5x developer productivity, especially due to the test infrastructures this can enable for certain projects
I can easily add custom functionality to the compiler to play nicely with my runtime
I get something that is designed and works exactly the way I want -- as much as this sounds like NIH, this will make my life easier
Cons:
Unless it can get popularity, I will be stuck with the burden of maintenance. I know I can at least get the Nemerle people over, since I think everyone wants something more professional, but it takes a village.
Due to the first con, I'm wary of using it in a professional setting. That said, I'm already using Nemerle and using my own custom modified compiler since they're not maintaining it well at all.
If it doesn't gain popularity, finding developers will be much more difficult, to an extent that Paul Graham might not even condone.
So based on all of this, what's the general consensus -- is this a good idea or a bad idea? And perhaps more helpfully, have I missed any big pros or cons?
Edit: Forgot to add the nesting example -- here's a case in Nemerle:
def foo =
    if(bar == 5)
        match(baz) { | "foo" => 1 | _ => 0 }
    else bar;
Edit #2: Figured it wouldn't hurt to give an example of the type of code that will be converted to this language if it's to exist (S. Lott's answer alone may be enough to scare me away from doing it). The code makes heavy use of custom syntax (opcode, :=, quoteblock, etc), expression nesting, etc. You can check out a good example here.
Sadly, there are no metrics or stories around failed languages. Just successful languages. Clearly, the failures outnumber the successes.
What do I base this on? Two common experiences.
Once or twice a year, I have to endure a pitch for a product/language/tool/framework that will Absolutely Change Everything. My answer has been constant for the last 20 or so years. Show me someone who needs support and my company will support them. And that's that. Never hear from them again. Let's say I've heard 25 of these.
Once or twice each year, I have to work with a customer who has orphaned technology. At some point in the past, some clever programmer built a tool/framework/library/package that was used internally for several projects. Then that programmer left. No one else can figure that darn thing out, and they want us to replace/rewrite it. Sadly, we can't figure it out either, and our proposal is to rewrite from scratch. And they complain that their genius built the set of apps in a period of weeks, so it can't take us months to rewrite them in Java/Python/VB/C#. Let's say I've written 25 or so of these kinds of proposals.
That's just me, one consultant.
Indeed, one particularly sad situation was a company whose entire IT software portfolio was written by one clever guy with a private language and tools. He hadn't left, but he'd realized that his language and toolset had fallen way behind the times -- the state of the art had moved on, and he hadn't.
And the move was -- of course -- in an unexpected direction. His language and tools were okay, but the world had started to adopt relational databases, and he had absolutely no way to upgrade his junk to move away from flat files. It was something he had not foreseen. Indeed, it was something he could not possibly foresee. [You won't fall into this trap, will you?]
So, we talked. He rewrote a lot of the applications in Plain-Old VAX Fortran (yes, this is a long time ago.) And he rewrote it to use plain old relational SQL stuff (Ingres, at the time.)
After a year of coding, they were having performance problems. They called me back to review all the great stuff they'd done in replacing the home-built language. Sadly, they'd done the worst possible relational database design. Worst possible. They'd taken their file copies, merges, sorts, and what-not, and implemented each low-level file system operation using SQL, duplicating database rows left, right and center.
He was so mired in his private vision of the perfect language, that he couldn't adapt to a relatively common, pervasive new technology.
I say go for it.
It would be an awesome experience regardless of whether it makes it to production or not.
If you make it compile down to IL, then you do not have to worry about not being able to re-use your compiled assemblies with C#.
If you believe that you have valid complaints about the languages you listed above, it is likely that many will think like you. Of course, for every 1000 interested person there might be 1 willing to help you maintain it - but that is always the risk
But here are a few things to be cautioned about:
Get your language specification IN STONE before development. Make sure any and all language features are figured out beforehand - even things that you may only want in the future. In my opinion, C# is slowly falling into the "oh-just-one-more-language-extension" trap that will lead to its eventual doom.
Be sure to make it optimized. I don't know what you already know; but if you don't know, then learn ;) Nobody will want a language that has nice syntax but runs as slow as IE's javascript implementation.
Good luck :D
When I first started my career in the early 90s, there seemed to be this craze of everyone developing their own in-house languages. My first 3 jobs were with companies that had done this. One company had even developed their own operating system!
From experience, I'd say this is a bad idea for the following reasons:
1) You will spend time debugging the language itself in addition to the code base on top of it
2) Any developers you hire will need to go through the learning curve of the language
3) It will be hard to attract and keep developers since working in a proprietary language is a dead-end for someone's career
The main reason I left those three jobs was because they had proprietary languages and you'll notice that not many companies take this route any more :).
An additional argument I'd make is that most languages have entire teams whose full time job it is to develop the language. Maybe you'd be an exception, but I'd be very surprised if you'd be able to match that level of development by only working on the language part-time.
Main complaints against Nemerle: The compiler has horrid error reporting, the implementation is buggy as hell (compiler and libraries), the macros can only be applied inside a function or as attributes, and it's fairly heavy dependency-wise (although not enough that it's a dealbreaker).
I see your post was written more than two years ago.
I advise you to try the Nemerle language today.
The compiler is stable. There are no blocker bugs today.
The VS integration has a lot of improvements, and there is also SharpDevelop integration.
If you give it a chance, you won't be disappointed.
NEVER EVER develop your own language.
Developing your own language is a fool's trap, and worse, it will limit you to what your imagination can provide, as well as demanding that you work out both your development environment and the actual programme you're writing.
The cases in which this doesn't apply are pretty much if you're Larry Wall, the AWK guys, or part of a substantial group of people dedicated to testing the boundaries of programming. If you're in any of those categories, you don't need my advice, but I strongly doubt that you're targeting a niche where there is no suitable programming language for the task AND the characteristics of the people doing the task.
If you are as clever as you seem to be (a likely possibility), my advice is to go ahead and do the design of the language first, iterate a couple of times over it, ask some smart fellows you trust in smart programming language related communities about the concrete design you came up with and then take the decision.
You might realize in the process of creating the design that just a quick hack on Nemerle would give it all you need, for example. Many things can happen just when thinking hard about a problem, and the final solution might not be what you actually had in mind when beginning the project.
Worst case scenario, you're stuck with actually implementing the design, but by then you will have it proofread and mature, and you'll know with a high degree of certainty that it was a good path to take.
A related piece of advice, start small, just define the features you absolutely need and then build on them to get the rest.
Writing your own language is not an easy project... especially one to be used in any kind of "professional setting".
It is a huge amount of work, and I doubt you could write your own language and still write any big projects that use it - you will spend so long adding features that you need, fixing bugs, and doing general language-design stuff.
I would strongly recommend choosing a language that is closest to what you want and extending it to do what you need. It'll never be exactly what you want, but compared to the time you'd spend writing your own language, I would say that's a small compromise...
Scala has a .NET compiler. I don't know the status of this, though. It's kind of a second-class citizen in the Scala world (which is more focused on the JVM). But it might be a good trade-off to adopt the .NET compiler instead of creating a new language from scratch.
Scala is kind of weak in the meta-programming department ATM. It's possible that the need for metaprogramming is somewhat reduced by other language features. In any case I don't think anyone would be sad if you were to implement metaprogramming features for it. Also there is a compiler plug-in infrastructure on the way.
I think most languages will never fit the whole bill.
You might want to combine your 2 favourite languages (in my case C# and Scheme) and use them together.
From a professional point of view, this is probably not a good idea though.
It would be interesting to hear some of the things you feel you can't do in existing languages. What kind of projects are you working on that can't be done in C#?
I'm just curious!

What are the biggest time wasters for learning programming? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 10 years ago.
I've had several false starts in the past with teaching myself how to program. I've worked through several books (mostly C and Python), and ended up just learning the syntax without feeling as though I could sit down and actually write a program for myself. When I try to look through the source trees of a project on Codeplex or Sourceforge, I never seem to know where to start reading the code -- the dependencies seem to go in all directions.
I feel as though I'm not learning programming the way it's done "on the street," so I figured I'd take a different approach to asking how a newbie should learn how to code. If you had to learn programming all over again, what are the things you wouldn't do? What did you spend time doing that you now know wasted weeks or months of your time?
Where I see beginners wasting weeks or months is typing at the keyboard. The computer is very responsive and will cheerfully chew up hours of your time in the edit-compile-run cycle. If you are learning you will save many hours if
You plan out your design on paper before you approach a computer. It doesn't matter what design method you pick or if you have never heard of a design method. Just write down a plan while your brain is fully engaged and not distracted by the computer.
When code will not compile or will not produce the right answer, if you can't fix it in five minutes, walk away from the computer. Go think about what's happening. Print out your code and scribble on it until you believe it's right.
These are just devices for helping to implement the simple but difficult old advice to think before you code.
When I was learning, I solved countless problems on the 15-minute walk from the computing center to my home. Sadly, with modern PCs we don't get that 15 minutes :-) If you can learn to take it anyway, you will become a better programmer, faster.
I certainly wouldn't start by looking at "real" software projects. Like you say, it's too hard to know where to start. That's largely because large projects are more about their large-scale design than about the individual algorithms or about program flow; for one thing, you're probably looking at a complex GUI application with multi-threading, etc. There isn't really anywhere to "start" looking at the code.
The best way to learn programming is to have a problem you want (need) to solve, and then going about solving it. But most importantly, WRITE CODE. When you read programming books, do ALL the exercises. Make sure you did them right. There's no substitute for writing code. No substitute for screwing up and then fixing it.
Stack Over F.. wait no, heh.
The biggest time sinks for me are generally about "finding the best answer." I often find that I will run into a problem that I know how to solve, but feel that there is a better solution, and go on the hunt for it. It is only hours or days later, when I have 7 instances of Firefox, each containing at least 5 tabs sprawled out across 46" of monitor space, that I come to my senses and realize I've been caught in the black hole that is the pursuit of endless knowledge.
My advice to you, and to myself for that matter, is to become comfortable with the notion of refactoring. Essentially what this means (in case you are not familiar with the term) is that you come up with a solution for a problem and go with it, even if there is quite likely a better way of doing it. Once you have finished the problem, or even the program, you can then revisit your methodology, study it, and figure out where you can make changes to improve it.
This concept has always been hard for me to follow. In college I preferred to write a paper once, print it, and turn it in. Writing code can be thought of very similarly to writing a paper. Simply putting the pen to the pad and pushing out what's on your mind may work - but when you look back over it with a fresh pair of eyes, you will, without question, see something you wish you had done differently.
I just noticed you talked about reading through source trees of other people's projects. Reading other people's code is a wonderful idea, but you must read more selectively. A lot of open-source code is hard to read and not stuff you should emulate anyway. So avoid reading any code that hasn't been recommended by a programmer you respect.
Hint: Jon Bentley, Brian Kernighan, Rob Pike, and P. J. Plauger, who are all programmers I respect, have published a lot of code worth reading. In books.
The only way to learn how to program is to write more code. Reading books is great, but writing / fixing code is the best way to learn. You can't learn anything without doing.
You might also want to look at this book, How to Design Programs, for more of a perspective on design than details of syntax.
The only thing that I did that wasted weeks or months was worry about whether or not my designs were the best way to implement a particular solution. I know now that this is known as "premature optimization" and we all suffer from it to one degree or another. The right way to learn programming is to solve a problem, measure your solution to make sure it performs well enough, then move on to the next problem. After some time you'll have a pile of problems you've solved, but more importantly, you'll know a programming language.
There is excellent advice here, in other posts. Here are my thoughts:
1) Learn to type, the reasons are explained in this article by Steve Yegge. It will help more than you can imagine.
2) Reading code is generally considered a hard task. So, it is better to get an open source project, compile it, and start changing it and learn that way, rather than reading and trying to understand.
I can understand the situation you're in. Reading through books, even many of them, will not make you a programmer. What you need to do is START PROGRAMMING.
Actually, programming is a lot like swimming in my opinion: even if you know only a little syntax and an even smaller set of coding techniques, start coding anyway. Make a small application - a home inventory, an expense catalog, a datesheet, a CD cataloger, anything you fancy.
The idea is to get into the nitty-gritty of it. Once you start programming you'll run into real-world problems, and your problem-solving skills will develop as you combat them. That's how you become a better programmer every day.
So get into the thick of it, and swim right through... That's how you'll make it.
Good luck
I think this question will have wildly different answers for different people.
For myself, I tried C++ at one point (I was about ten and had already been programming for a while), with a click-and-drag UI builder. I think this was a mistake, and I should have gone straight to C and pointers and such. Because I'm just that kind of person.
In your case, it sounds like you want to be led down the right path by someone and feel a bit timid about jumping in and doing something by yourself. (You've read several books and now you're asking what not to do.)
I'll tell you how I learned: by doing plenty of fun, relatively short projects, steadily growing in difficulty. I began with QBasic (which I think is still a great learning tool) and it was there where I developed most of my programming skills. They have of course been expanded and refined since that time but I was already capable of good design back in those days.
The sorts of projects you could take on depend on your interests; if you're mathematically inclined you might want to try a prime number generator or projecting 3D points onto the screen; if you're interested in game design then you could try cloning pong (easy) or minesweeper (harder); or if you're more of a hacker you might want to make a simple chat program or file encryption software.
Work on these projects on your own, and don't worry about whether you're doing things the "right" way. As long as you get it to work, you've learned many things. Some time after you've completed a project you may want to revisit it and try to do it better, or just see how other people have done that sort of thing.
Given the way you seem to want to be led along, perhaps you should find yourself a mentor.
Do not learn how to use pointers and how to manually manage memory. You mentioned C, and I spent plenty of time trying to fix bugs that were caused by mixing *x and &x. This is evil...
Find some problem to solve, write or draw a sketch of an algorithm solving the problem, then try to write it. Either use Python (which is much more friendly for beginners) or use C with statically allocated memory only. And use books/tutorials. They offer multiple excercises with solutions, so you can compare yours with them and see other approaches.
Once you feel that you can actually write something simple, look at a book or tutorial on object-oriented design. It's not the best the world has to offer, but it might turn out to be intuitive. If not, check out functional programming (languages like Lisp, Scheme, or Haskell) or logic programming (like Prolog). Maybe those will suit you better.
Also, find a mate: a person you can talk to about coding, code maintenance, and design. Such a person is worth even more than a book.
To all C fans: the C language is great, really. It allows memory usage to be optimized to an extent impossible in high-level languages like Python or Ruby. The compiled code is also very fast, and it is the only real choice for an RTOS or a modern 3D game engine. But it is not a good entry point for a beginner; that's what I believe.
Oh, and good luck to you! And don't be ashamed to ask! If you don't ask, the answer is much harder to find.
Assuming you have decent math skills, try http://projecteuler.net/. It presents a series of problems of increasing difficulty that should be solvable by writing short programs. This should give you experience in solving specific problems without getting lost in the details of open source projects.
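For a feel of the scale involved, here is a short Python program in the spirit of the early Project Euler problems (one of the first asks, as I recall, for the sum of all multiples of 3 or 5 below 1000); the brute-force approach here is just one possible take:

    # Sum of all natural numbers below 1000 that are multiples of 3 or 5.

    def sum_multiples(limit=1000):
        return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

    if __name__ == "__main__":
        print(sum_multiples())  # 233168
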
After basic language syntax, you need to learn design, which is hard. This book may help.
I think you should stop thinking you've wasted time so far; instead, I think your education is just incomplete, and you've taken a step you're not really ready for. It sounds like the books you've read are useful and you're learning the intricacies of the language. It sounds like you're just not accustomed to the tools you'd then use to package that code together so it runs.
Some books that focus on topics like language syntax, design patterns, algorithms, and data structures will never mention the tools you need to actually apply that information. These books are great, but if they're all you've touched, I think that would explain your situation.
What development environment are you using? If you're developing for Windows you really should be proficient with creating projects, adding code, running, and debugging in Visual Studio. You can download Visual Studio Express for free from Microsoft.
I recommend looking for tutorial-like books that actually step you through the UI of the development environment you are using. Look for actual screenshots with dropdown menus. Look at what the tutorials walk you through, and if it's something you don't know how to do, consider buying that book. Preferably it will have code you can copy and paste in, not code you write yourself.
I personally don't like these books, as I can anticipate how to do new things in VS based on how I'd do other things. But if your training is incomplete from a tools-usage perspective, this could move you in the right direction.
It is probably harder to find these types of tutorial books for Python or C development. There is an overabundance of them for .NET development, though.
As someone who has only been working as a programmer for 6 months, I might not be the best person to help you get going, but since it wasn't that long ago that I knew next to nothing, it's quite fresh in my mind.
When I started my current job programming wasn't going to be part of my job description but when the opportunity came up to do some programming on the side, I couldn't pass it up.
I spent about a month doing tutorials on About.com's Delphi section. As much as people diss About.com, Zarko Gajic's tutorials were simple to understand and easy to follow. Once I had a basic grasp of the language and the IDE, I jumped straight into a project exporting accounting data for a program called "Adept". It took me a while, but I got there...
The biggest help for me was taking on a personal project. I developed an IRC bot in Java for a crappy 2D game called Soldat. I learnt a lot by planning out and coding my own project.
Now I'm pretty comfortable with Delphi Pascal, SQL, C# and Java. I think, once you get the hang of one OOP language, you can learn the syntax of another language, and it gets a lot easier to catch on.
Perhaps start with a small existing project, and find something within it that handles some core part of what it does. Then, with a debugger, step through it and follow what it's doing from the point where you ask it to do that thing for you.
This helps you in a number of ways. You start to better grasp all of the various things that are touched by the code as it attempts to complete its request. You also learn invaluable debugging techniques, which far too many developers seem to lack: while you can often eventually discover what is wrong with repeated printf() (or equivalent) calls, if you can use a debugger you can solve issues an order of magnitude faster.
I have found that, conceptually, a great mental model for understanding programming in the abstract is a pattern of data flow. When a user manipulates data, how is it altered by a program for digestion and storage? How is it transformed to re-present to the user in a form that makes sense to them? Fundamentally, code is about the transformation of data, and all code can be broken down into constructs of various sizes whose purpose is to alter data in one way or another. Bugs form around the mismatch between what the programmer was expecting from the data, how the high-level libraries the coder is using treat the data, and how the data actually arrives. Following code with a debugger helps you fully understand this transformation in action by observing changes as they occur.
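As a minimal sketch of that habit in Python, assuming an entirely hypothetical transformation function, you can drop into the built-in pdb debugger instead of sprinkling print() calls:

    # transform.py -- a hypothetical data transformation to step through.

    def normalize(record):
        """Trim whitespace and lower-case the keys of a raw record."""
        return {key.strip().lower(): value for key, value in record.items()}

    def main():
        raw = {"  Name ": "Ada", "AGE": 36}
        breakpoint()          # drops into pdb here (Python 3.7+)
        clean = normalize(raw)
        print(clean)

    if __name__ == "__main__":
        main()

Inside pdb, 's' steps into normalize(), 'n' moves to the next line, and 'p raw' prints the data at that point, which is exactly the "watch the transformation happen" habit described above.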
Standard answer is to make something; picking an easy language to do it in is good, but not essential. It's more the working out stuff in your own head, fixing it because it won't work, that really teaches you. For me, this always happens when I try my eternal dream projects (games) which I never finish but always learn from.
I think the thing I would avoid is learning a language in isolated snippets that don't really hang together but just teach various facets of a particular language. As others have said, the really hard and important thing is to learn design. I think the best way to do this is through a tutorial that walks you through creating an actual application, teaching design along the way. That way you can learn why certain decisions are made and learn how to accomplish what's needed to implement the design choices.
For example, I found Agile Web Development with Rails to be a really easy way to learn Ruby on Rails, much better than simply reading a Ruby manual or even poking my way around scattered web tutorials.
Another thing that I would avoid is developing code in isolation, that is, not having people look at it as I go along. Getting feedback from a mentor will help keep you on the right track with respect to the choices you are making and the correct use of language idioms.
Find a problem in your life or something you do that you just feel could be more efficient and write a small solution to it. It might just be a single script but you will gain much more confidence in your abilities when you start to see useful results of your work. You will also be more motivated to finish it as you are interested in using the solution. Start simple and small and then gradually move up to bigger projects.
And as you're working on a small project, focus on building everything with quality. I think this is lost on some programmers who feel that their software is more impressive if it contains a ton of features, but usually those features aren't well done or usable. If you focus on building quality solutions to real problems, you'll be a fantastic programmer.
Good luck!
Work on projects/problems that you already know how to solve partially
You should read Mike Clark's article: How I Learned Ruby. Essentially, he used Ruby's test framework to exercise different elements of the language.
I used this technique to learn Python and it was very, very helpful. Not only did I learn the language, but I was very proficient in Python's test framework at the end of the exercise. Once you have the basics you can start reading code and then working on building some larger project.
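In that spirit, here is a minimal sketch of a "learning test" written with Python's built-in unittest module; the specific language facts being asserted are just examples I picked, not from Clark's article:

    import unittest

    class LearnPythonStrings(unittest.TestCase):
        """Tiny 'learning tests': each assertion records something
        I just learned about the language."""

        def test_split_defaults_to_whitespace(self):
            self.assertEqual("a  b\tc".split(), ["a", "b", "c"])

        def test_slicing_never_raises_index_error(self):
            self.assertEqual("abc"[1:100], "bc")

        def test_ints_are_arbitrary_precision(self):
            self.assertEqual(2 ** 100, 1267650600228229401496703205376)

    if __name__ == "__main__":
        unittest.main()

Every time you learn something new about the language, you add an assertion; running the file tells you whether your mental model still matches reality.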

How do you tell someone they're writing bad code? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Locked. This question and its answers are locked because the question is off-topic but has historical significance. It is not currently accepting new answers or interactions.
I've been working with a small group of people on a coding project for fun. It's an organized and fairly cohesive group. The people I work with all have various skill sets related to programming, but some of them use older or outright wrong methods, such as excessive global variables, poor naming conventions, and other things. While things work, the implementation is poor. What's a good way to politely ask or introduce them to use better methodology, without it coming across as questioning (or insulting) their experience and/or education?
Introduce questions to make them realise that what they are doing is wrong. For example, ask these sort of questions:
Why did you decide to make that a global variable?
Why did you give it that name?
That's interesting. I usually do mine this way because [Insert reason why you are better]
Does that way work? I usually [Insert how you would make them look silly]
I think the ideal way of going about this is subtly asking them why they code a certain way. You may find that they believe that there are benefits to other methods. Unless I knew the reason for their coding style was due to misinformation I would never judge my way as better without good reason. The best way to go about this is to just ask them why they chose that way; be sure to sound interested in their reasoning, because that is what you need to attack, not their ability.
A coding standard will definitely help, but if it were the answer to every software project then we'd all be sipping cocktails on our private islands in paradise. In reality, we're all prone to problems and software projects still have a low success rate. I think the problem would mostly stem from individual ability rather than a problem with convention, which is why I'd suggest working through the problems as a group when a problem rears its ugly head.
Most importantly, do NOT immediately assume that your way is better. In reality, it probably is, but we're dealing with another person's opinion and to them there is only one solution. Never say that your way is the better way of doing it unless you want them to see you as a smug loser.
Start doing code reviews or pair programming.
If the team won't go for those, try weekly design reviews. Each week, meet for an hour and talk about a piece of code. If people seem defensive, pick old code that no one is emotionally attached to any more, at least at the beginning.
As JesperE said, focus on the code, not the coder.
When you see something you think should be different, but others don't see it the same way, then start by asking questions that lead to the deficiencies, instead of pointing them out. For example:
Globals: Do you think we'll ever want to have more than one of these? Do you think we will want to control access to this?
Mutable state: Do you think we'll want to manipulate this from another thread?
I also find it helpful to focus on my limitations, which can help people relax. For example:
long functions: My brain isn't big enough to hold all of this at once. How can we make smaller pieces that I can handle?
bad names: I get confused easily enough when reading clear code; when names are misleading, there's no hope for me.
Ultimately, the goal is not for you to teach your team how to code better. It's to establish a culture of learning in your team. Where each person looks to the others for help in becoming a better programmer.
Introduce the idea of a code standard. The most important thing about a code standard is that it proposes the idea of consistency in the code base (ideally, all of the code should look like it was written by one person in one sitting) which will lead to more understandable and maintainable code.
You have to explain why your way is better.
Explain why a function is better than cutting & pasting.
Explain why an array is better than $foo1, $foo2, $foo3 (see the small sketch after this answer).
Explain why global variables are dangerous, and that local variables will make life easier.
Simply whipping out a coding standard and saying "do this" is worthless because it doesn't explain to the programmer why it's a good thing.
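For instance, a minimal Python sketch of the array point (the names and numbers are invented purely for illustration):

    # Before: one variable per value, so every new value means more code.
    foo1, foo2, foo3 = 10, 25, 7
    total = foo1 + foo2 + foo3

    # After: one list, so the same code handles three values or three thousand.
    foos = [10, 25, 7]
    total = sum(foos)
    assert total == 42
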
First, I'd be careful not to judge too quickly. It's easy to dismiss some code as bad when there might be good reasons why it's that way (e.g., working with legacy code that has weird conventions). But let's assume for the moment that it really is bad.
You could suggest establishing a coding standard, based on the team's input. But you really need to take their opinions into account then, not just impose your vision of what good code should be.
Another option is to bring technical books into the office (Code Complete, Effective C++, The Pragmatic Programmer...) and offer to lend them to others ("Hey, I'm finished with this, would anyone like to borrow it?").
If possible, make sure they understand that you're criticizing their code, not them personally.
Suggest a better alternative in a non-confrontational way.
"Hey, I think this way will work too. What do you guys think?" [Gesture to obviously better code on your screen]
Have code reviews, and start by reviewing YOUR code.
It will put people at ease with the whole code review process because you are beginning the process by reviewing your own code instead of theirs. Starting off with your code will also give them good examples of how to do things.
They may think your style stinks too. Get the team together to discuss a consistent set of coding style guidelines. Agree to something. Whether it fits your style isn't the issue; settling on any style, as long as it's consistent, is what matters.
By example. Show them the right way.
Take it slow. Don't thrash them for every little mistake right off the bat, just start with things that really matter.
The code standard idea is a good one.
But consider not saying anything, especially since it is for fun, with, presumably, people you are friends with. It's just code...
There's some really good advice in Gerry Weinberg's book "The Psychology of Computer Programming" - his whole notion of "egoless programming" is all about how to help people accept criticism of their code as distinct from criticism of themselves.
Bad naming practices: Always inexcusable.
And yes, do not always assume that your way is better... It can be difficult, but objectivity must be maintained.
I've had an experience with a coder who had such horrible naming of functions that the code was worse than unreadable. The functions lied about what they did; the code was nonsensical. And they were protective of, and resistant to, having someone else change their code. When confronted very politely, they admitted it was poorly named, but wanted to retain their ownership of the code and said they would go back and fix it up "at a later date."
This is in the past now, but how do you deal with a situation where the error is ACKNOWLEDGED, but then protected? This went on for a long time and I had no idea how to break through that barrier.
Global variables: I myself am not THAT fond of global variables, but I know a few otherwise excellent programmers who like them A LOT. So much so that I've come to believe they are not actually all that bad in many situations, as they allow for clarity and ease of debugging. (Please don't flame/downvote me :) ) It comes down to this: I've seen a lot of very good, effective, bug-free code that used global variables (not put in by me!), and a great deal of buggy, impossible-to-read/maintain/fix code that meticulously used proper patterns. Maybe there IS a place (though perhaps a shrinking one) for global variables? I'm considering rethinking my position based on the evidence.
Start a wiki on your network using some wiki software.
Start a category on your site called "best practices" or "coding standards" or something.
Point everyone to it. Allow for feedback.
When you do releases of the software, have the person whose job it is to put code into the build push back on developers, pointing them to the Wiki pages on it.
I've done this in my organization, and it took several months for people to really get the hang of using the wiki, but now it's an indispensable resource.
If you have even a loose coding standard, being able to point to it, or indicating that you can't follow the code because it's not in the agreed format, may be worthwhile.
If you don't have a coding format, now would be a good time to get one in place. Something like the answers to this question may be helpful: https://stackoverflow.com/questions/4121/team-coding-styles
I always go with the line 'This is what I would do'. I don't try and lecture them and tell them their code is rubbish but just give an alternative viewpoint that can hopefully show them something that is obviously a bit neater.
Have the person(s) in question prepare a presentation to the rest of the group on the code for a representative module they have written, and let the Q&A take care of it (trust me, it will, and if it's a good group, it shouldn't even get ugly).
I do love code, and I never had any course in my life about anything related to informatics. I started out very badly and learned from examples, but what I have always remembered and kept in mind since I read the "Gang of Four" book is:
"Everyone can write code that is understood by a machine, but not all can write code that is understood by a human being"
With this in mind, there is a lot that can be done in the code ;)
I can't emphasize patience enough. I've seen this exact sort of thing completely backfire mostly because someone wanted the changes to happen NOW. Quite a few environments need the benefits of evolution, not revolution. And by forcing change today, it can make for a very unhappy environment for all.
Buy-in is key. And your approach needs to take into account the environment you are in.
It sounds like you're in an environment that has a lot of "individuality" to it. So... I wouldn't suggest a set of coding standards. It will come across that you want to take this "fun" project and turn it into a highly structured work project (oh great, what's next... functional documents?). Instead, as someone else said, you'll have to deal with it to a certain extent.
Stay patient and work toward educating others in your direction. Start with the edges (points where your code interacts with others) and when interacting with their code try to take it as an opportunity to discuss the interface they've created and ask them if it would be okay with them if it was changed (by you or them). And fully explain why you want the change ("it will help deal with changing subsystem attributes better" or whatever). Don't nit-pick and try to change everything you see as being wrong. Once you interact with others on the edge, they should start to see how it would benefit them at the core of their code (and if you get enough momentum, go deeper and truly start to discuss modern techniques and the benefits of coding standards). If they still don't see it... maybe you'll need to deal with that within yourself (especially on a "fun" project).
Patience. Evolution, not revolution.
Good luck.
I don a toga and open a can of Socratic method.
The Socratic method, named after the classical Greek philosopher Socrates, is a form of philosophical inquiry in which the questioner explores the implications of others' positions to stimulate rational thinking and illuminate ideas. This dialectical method often involves an oppositional discussion in which the defense of one point of view is pitted against another; one participant may lead another to contradict himself in some way, strengthening the inquirer's own point.
A lot of the answers here relate to code formatting which these days is not particularly relevant, as most IDEs will reformat your code in the style you choose. What really matters is how the code works, and the poster is right to look at global variables, copy & paste code, and my pet peeve, naming conventions. There is such a thing as bad code and it has little to do with format.
The good part is that most of it is bad for a very good reason, and these reasons are generally quantifiable and explainable. So, in a non-confrontational way, explain the reasons. In many cases, you can even give the writer scenarios where the problems become obvious.
I'm not the lead developer on my project and therefore can't impose coding standards, but I have found that bad code usually causes an issue sooner rather than later, and when it does I'm there with a cleaner idea or solution.
By not interjecting at the time and taking a more natural approach, I've gained more trust with the lead, and he often turns to me for ideas and includes me in the architectural design and deployment strategy used for the project.
People writing bad code is just a symptom of ignorance (which is different from being dumb). Here are some tips for dealing with those people.
People's own experience leaves a stronger impression than anything you will say.
Some people are not passionate about the code they produce and will not listen to anything you say
Pair programming can help share ideas, but switch who's driving or they'll just be checking email on their phone
Don't drown them with too much; I've found even Continuous Integration needed to be explained a few times to some older devs
Get them excited again and they will want to learn. It could be something as simple as programming robots for a day
TRUST YOUR TEAM; coding standards and tools that check them at build time are often never read, or are just annoying.
Remove code ownership. On some projects you will see code silos or ant hills where people say "that's my code and you can't change it". This is very bad, and you can use pair programming to remove it.
Instead of having them write code, have them maintain their code.
Until they have to maintain their steaming pile of spaghetti, they will never understand how bad they are at coding.
Nobody likes to listen to someone saying their work sucks, but any sane person will welcome mentoring and ways of avoiding unnecessary work.
One school of teaching even says that you should not point out mistakes, but focus on what is done right. For instance, instead of pointing out incomprehensible code as bad, you should point out where their code is particularly easy to read. In the first case you are priming others to think and act like crappy programmers. In the latter case you are priming them to think like skilled professionals.
I have a similar scenario with the guys I work with. They don't have as much exposure to coding as I do, but they are still useful coders.
Rather than letting them do what they want and then going back and editing the whole thing, I usually just sit them down and show them two ways of doing things: their way and my way. From there we discuss the pros and cons of each method, and so come to a better understanding, and a better conclusion, about how we should go about programming.
Here is the really surprising part: sometimes they will come up with questions that even I don't have answers to, and after some research we all get a better grasp of methodology and structure.
Discuss.
Show them why.
Don't ever think you are always right. Sometimes even they will teach you something new.
That's what I would do if I were you :D
Probably a bit late after the fact, but that's where an agreed coding standard is a good thing.
I frankly believe that someone's code is better when it's easier to change, debug, navigate, understand, configure, test and publish (whew).
That said, I think it is impossible to tell someone their code is bad without first having them explain what it does, or how anyone is supposed to enhance it afterwards (for example, to add new functionality or debug it).
Only then will it click, and anyone will be able to see that:
Changes to the values of global variables are almost always untrackable
Huge functions are hard to read and understand
Patterns make your code easier to enhance (as long as you obey their rules)
(etc...)
Perhaps a session of pair programming should do the trick.
As for enforcing coding standards: it helps, but standards fall far short of really defining what good code is.
You probably want to focus on the impact of the bad code, rather than what might be dismissed as just your subjective opinion of whether it's good or bad style.
Privately inquire about some of the "bad" code segments with an eye toward the possibility that it is actually reasonable code (no matter how predisposed you may be), or that there are perhaps extenuating circumstances. If you are still convinced that the code is just plain bad, and that the source actually is this person, just go away. One of two things may happen: 1) the person notices and takes some corrective action, or 2) the person does nothing (is oblivious, or doesn't care as much as you do).
If #2 happens, or #1 does not result in sufficient improvement from your point of view, AND it is hurting the project, and/or impinging on you enough, then it may be time to start a campaign to establish/enforce standards within the team. That requires management buy-in, but is most effective when instigated from grass roots.
Good luck with that. I feel your pain, brother.

Resources