I wondered if any of you have knowledge of the internal workings of Windows (kernel, interrupts, etc.) and whether you've found that you've become a better developer as a result?
Do you find that "the more knowledge, the better" is a good motto to have as a developer?
I find myself studying a lot of things, thinking that with more understanding I'll be a better developer. Of course, practice and experience also come into play.
This is a no-brainer - absolutely (assuming you're a developer primarily on the Windows platform, of course). Just as a working knowledge of the car's engine makes you a better driver, knowing what's going on under the hood will make a lot of common programming tasks (debugging, performance work, etc.) a lot easier.
Windows Internals is the standard reference.
I believe it is valuable to understand how things work underneath: CLR/.NET down to C++, native code down to ASM, ASM down to CPU architecture, registers and ops built from logic gates, logic gates from MOSFETs, transistors from quantum physics, and the latter from the corresponding mathematical apparatus (group theory, etc.).
Understanding the low level makes you not only think differently but also feel differently - like you are in control of things, standing on the shoulders of giants.
More knowledge is always better, and having knowledge at many levels is a lot more valuable than just knowing whatever layer of abstraction you are working at.
A good rule of thumb is that you should have a good knowledge of the layer below the layer where you are working. So, for example, if you write a lot of .NET code, you should know how the CLR works. If you write a lot of web apps, you should understand HTTP. If you are writing code that uses HTTP directly, then you should understand TCP/IP. If you are implementing a TCP/IP stack, then you need to understand how Ethernet works.
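To make the "layer below" idea concrete, here is a minimal C# sketch (the hostname and structure are purely illustrative) of the layer below a web app: an HTTP GET written by hand over a raw TCP connection instead of through a high-level client.

    // Illustrative only: issue an HTTP GET over a raw TCP socket to see the layer below.
    using System;
    using System.IO;
    using System.Net.Sockets;
    using System.Text;

    class RawHttpGet
    {
        static void Main()
        {
            using (var client = new TcpClient("example.com", 80))
            using (NetworkStream stream = client.GetStream())
            {
                // HTTP/1.1 is just lines of text flowing over a TCP stream.
                string request = "GET / HTTP/1.1\r\n" +
                                 "Host: example.com\r\n" +
                                 "Connection: close\r\n\r\n";
                byte[] bytes = Encoding.ASCII.GetBytes(request);
                stream.Write(bytes, 0, bytes.Length);

                using (var reader = new StreamReader(stream))
                {
                    // The status line and headers arrive before the body.
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }
    }

Seeing the request as plain text over a socket makes it much clearer what a high-level HTTP client is actually doing for you.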
Knowledge of Windows internals is really helpful if you are writing native Win32 code, or if OS performance issues are critical to what you are doing. At higher levels of abstraction, it may be less helpful, but it never hurts.
I don't think one requires the kind of special or secret knowledge of internals that may be extended to members of the Windows team or to those with source access, but I absolutely contend that understanding internals helps you become a better developer.
Take threading, for instance: if you are going to build an application that uses threading in even a moderate way, understanding how Windows works - how its threading works, how memory and processes work - is key to being able to do a good job with that code.
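As a small illustration of the kind of thing that bites you when you don't understand the threading model, here is a minimal C# sketch (all names are made up for illustration): two threads incrementing a shared counter lose updates unless the increment is synchronized.

    // Illustrative sketch: an unsynchronized increment is a read-modify-write
    // that the scheduler can interleave, so updates get lost; a lock fixes it.
    using System;
    using System.Threading;

    class CounterDemo
    {
        static int unsafeCount = 0;
        static int safeCount = 0;
        static readonly object gate = new object();

        static void Main()
        {
            Thread t1 = new Thread(Work);
            Thread t2 = new Thread(Work);
            t1.Start();
            t2.Start();
            t1.Join();
            t2.Join();

            // unsafeCount usually comes up short of 2000000; safeCount never does.
            Console.WriteLine("Unsafe: {0}, Safe: {1}", unsafeCount, safeCount);
        }

        static void Work()
        {
            for (int i = 0; i < 1000000; i++)
            {
                unsafeCount++;        // not atomic
                lock (gate)
                {
                    safeCount++;      // serialized by the lock
                }
            }
        }
    }

Knowing why the unsynchronized counter loses updates is exactly the sort of internals knowledge that pays off in real code.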
I agree to a point with your edict, but I would not agree that experience, practice, and knowledge are mutually exclusive. The net-net of experience is that you have knowledge gained from that experience. There is also a wisdom component to experience and practice, but those are usually intangible, situational elements that you apply in the future to avoid mistakes. Bottom line: knowledge is a precipitate of experience.
Think of it this way: how many people do you know with 30+ years of experience in IT? Think of them and take the top two. Now go into that memory bank and think of the people you know in the industry who are super smart, who know so much about so many things, and pick the top two of those. You now have your final four - if you had to pick one to start a project with, who would it be? Invariably we pick the super-smart guy.
Yes, understanding Windows internals helped me to become a better programmer. It also taught me a lot of bad practices, bad ideas, and poor design concepts.
I highly suggest studying OS X or Linux internals as an alternative. It'll take less time, make more sense, and be much more productive.
Read code. Read lots of code. Read lots of good code. jQuery, Django, AIR framework source, Linux kernel, compilers.
Try to learn programming languages that introduce you to new approaches, like Lisp, Ruby, Python, or JavaScript. OOP is good, but .NET and Java seem to take the brainwash approach to it and elevate it to some kind of religious level, instead of treating it as just a good tool in your toolbox.
If you don't understand the code you are reading, it likely means you are on the right track, and learning new techniques.
I'd suggest getting a Mac, simply because you'll find yourself wanting to make your UIs simpler and easier. It's really important to have a good environment if you want to become a great programmer. Surround yourself with engineers better than yourself (if you can), work with frameworks and languages that take the 'engineer' approach vs. the 'experimenter' approach, and... use an operating system that contains code better than yours.
I'd also recommend the book "Coders at Work".
It depends. Many programmers who understand the internals of a system begin writing optimised code to exploit that knowledge. This has three very serious side-effects:
1) It's harder for others without that knowledge to extend or support the code.
2) System internals may change without notice, whereas interfaces are usually versioned and changes are discussed publicly.
3) Interfaces are generally consistent across platform revisions and hardware; internals do not have this consistency.
In short, there's a lot of unsupportable code out there that's broken because it relies on an internal behaviour that the vendor changed without notice.
The father of the C language said that you don't need to learn all the features of a language to write great code: the better you understand the problem, the better you write the code. Having knowledge is always better.
I read this article, specifically the "Intellisense" and "Generated Code" parts:
http://www.charlespetzold.com/etc/DoesVisualStudioRotTheMind.html
Do you think the author is right?
I don't agree that IntelliSense is soooo bad for programmers. Visual Studio for C# tends to "hide" the controls' event wiring in another file, but you can find it if you know enough about the language, and you can modify it by hand. And with VS I don't need to memorize all the .NET classes I use.
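For readers who haven't looked at that other file, here is roughly what the split looks like in a WinForms project; the control and handler names below are made up, and a real designer file contains more bookkeeping than this sketch shows.

    // Illustrative sketch of the two halves of a WinForms form.
    using System;
    using System.Windows.Forms;

    // The part Visual Studio generates and "hides" (normally Form1.Designer.cs):
    public partial class Form1 : Form
    {
        private Button saveButton;

        private void InitializeComponent()
        {
            this.saveButton = new Button();
            this.saveButton.Text = "Save";
            // The event is wired up here, not in the file you normally edit.
            this.saveButton.Click += new EventHandler(this.saveButton_Click);
            this.Controls.Add(this.saveButton);
        }
    }

    // The part you edit by hand (normally Form1.cs):
    public partial class Form1
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void saveButton_Click(object sender, EventArgs e)
        {
            MessageBox.Show("Saved.");
        }
    }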
I think it doesn't matter whether you use an IDE or Notepad, but if these RAD tools exist and are free... why not use them?
No, I very much disagree with this point.
Yes, I do agree that IntelliSense allows me to keep less of an object's growing number of members in my head. I am dumber in the sense that I often know less about the intricate details of projects where I use IntelliSense heavily.
For instance, I can probably rattle off all of the members of the C++ types I use with great accuracy. I tend to be a Vim-only guy for my C++ projects and hence don't really use IntelliSense. In C# and VB.NET projects, though, I couldn't rattle off the members with the same accuracy, as I rely on IntelliSense more often.
But there is a trade-off. Keeping all of the members in my head comes with a cost. When writing code, instead of focusing on the algorithm, I focus on the members. I have to constantly think about the naming convention of a particular type, or the parameter list, or what's by-ref and what's by-val, when writing out an algorithm in C++. In C#/VB.NET I'm freer to think about the algorithm, as the IDE takes care of finding the members for me.
Does this mean I'm dumber? No it simply means I'm able to focus on the problem I'm actually trying to solve. I feel this makes me more productive and hence smarter not dumber.
It doesn't make smart people dumber, but it makes dumb people look smarter.
No, modern programming tools and languages help the programmer focus less on the little things and more on the big picture.
The main goal is to design solid software. If a programmer doesn't have to worry about memorizing every method of a class, they can spend more time on engineering the product.
Our physics prof always said: why memorize something you can look up? He always listed the required formulas on the board during exams. Seems to me IntelliSense is the same idea. Rather than remembering whether the object uses a Count or a Length property, let VS tell me.
No, it enables us to code faster I think. Anything to make the coding process faster, easier and simpler is a step in the right direction in my opinion.
Not dumber, it makes us faster :)
I use intellisense and generated code to speed up development, not because I don't know what I'm doing. Therefore, I can't agree that using them makes you dumber.
I am the kind of person that will try to learn as much about a language as possible before attempting to use the tools that facilitate development in that language. In that regard, I have to agree with Matthew Jones' comment that "tools do not make people dumber...laziness and lack of drive do."
Programming is just moving forward to make life easier for the programmer and making him more productive.
It would be like complaining that we don't write assembly code anymore... it's important to know the big concepts and ideas behind it, but working with it would be weird (in most cases).
I don't think so.
Intellisense makes things like case sensitive spelling easier.
Is it MyArray.Count() or MyArray.Size() or Length(MyArray)...? And what's the return type of a particular method, again? IntelliSense saves me a few minutes every day on Google for things like this.
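A small illustration of that naming drift (the code is purely illustrative): in .NET, arrays expose Length, List<T> exposes Count, and LINQ adds a Count() extension method on top.

    // Three spellings of "how many elements" that IntelliSense sorts out for you.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class LengthVsCount
    {
        static void Main()
        {
            int[] array = { 1, 2, 3 };
            List<int> list = new List<int> { 1, 2, 3 };

            Console.WriteLine(array.Length);   // property on arrays
            Console.WriteLine(list.Count);     // property on List<T>
            Console.WriteLine(list.Count());   // LINQ extension method, also valid
        }
    }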
Detail memorization is not the most important skill in software development. It is better to have problem-solving skills and the ability to find the information you need. If you invest more time in the details, you will be lost when the next great language is born, but algorithms and patterns will still be relevant.
The question, of course, is: does IntelliSense make programming less of a skilled profession?
Yes, I agree with the author. Intellisense (and many other Visual Studio features) is indeed "making us dumber" for the reasons mentioned in the article.
That's not always a bad thing. Sometimes it's more desirable to be productive than it is to get smarter. The challenge is striking the right balance. :)
The only qualm with IntelliSense that the author seems to have is the autocomplete when you press the space bar, which apparently he doesn't realize you can turn off in the Options menu.
He also claims that coding "has become a constant dialog with IntelliSense"... which makes no sense, because you still have to pick the correct methods from the list! Without it, you'd simply have to search online for the name of the method instead of getting an instantaneous search.
It's interesting how the author ignores that IntelliSense can't tell you whether to use a StringBuilder or a String, etc.
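For example, this is the kind of judgment call that stays with the programmer no matter what the IDE suggests; a rough sketch (illustrative only) of why the choice matters:

    // Repeated string concatenation copies everything built so far on each +=,
    // while StringBuilder appends into a growable buffer.
    using System;
    using System.Text;

    class ConcatVsBuilder
    {
        static void Main()
        {
            // Roughly O(n^2) overall.
            string s = "";
            for (int i = 0; i < 10000; i++)
            {
                s += "x";
            }

            // Roughly O(n): appends go into the same buffer.
            var sb = new StringBuilder();
            for (int i = 0; i < 10000; i++)
            {
                sb.Append("x");
            }
            string t = sb.ToString();

            Console.WriteLine(s.Length == t.Length);  // same result, very different cost
        }
    }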
Not at all. When the intellisense list pops up, does a programmer search through the whole list every time to find the function they were looking for? Maybe at first, but normally you keep typing until intellisense narrows down the list to the point where it's faster to use the up/down arrows and tab to complete.
Without intellisense, it would take a little longer to code given that you are experienced with the classes that you're using and a lot longer given that you aren't. It only serves as a speed tool and quick documentation of everything that's available.
It doesn't make us dumber; it is a necessity.
Back in the day (MS BASIC for me), there was no need for intellisense. The scope of the language was limited enough for a programmer to remember all keywords and functions.
Jump to today, intellisense is an absolute requirement. Take .Net for example. There is simply no way to remember or discover the many thousands of types, properties and methods. Oh sure, for a very small project you may know a bunch (100s?) of items. But let's be honest - there is no way a modern working programmer could exist without it.
Adding my two cents here.
From my own experience, and as mentioned in TFA, I would say that the only drawback I've encountered so far is that while you learn the language you might pick up bad habits - using ArrayList instead of List<T>, for example, only because you're not aware that changing the using clauses would give you the other datatype.
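For anyone who hasn't hit this, here is a rough sketch of the difference (illustrative code): the non-generic ArrayList stores everything as object, while List<T> stays strongly typed.

    // ArrayList boxes value types and hands back object; List<T> does neither.
    using System;
    using System.Collections;          // ArrayList lives here
    using System.Collections.Generic;  // List<T> lives here

    class ArrayListVsList
    {
        static void Main()
        {
            ArrayList untyped = new ArrayList();
            untyped.Add(42);                 // boxed to object
            int a = (int)untyped[0];         // cast required; wrong type fails at runtime

            List<int> typed = new List<int>();
            typed.Add(42);                   // no boxing
            int b = typed[0];                // no cast; checked at compile time

            Console.WriteLine(a + b);
        }
    }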
The author complains that he gets the wrong datatypes when entering certain datatypes. While some of you will probably get a license and a weapon and start the manhunt, I've found that using naming conventions is an excellent way of forcing IntelliSense to work my way, especially when working in GUI-control-intensive forms and such.
No more so than calculators made for poorer mathematicians and physicists. Sure, using a slide rule forces you to keep a mental model of the order of magnitude of things, but it is really just a tool ... and better tools let you do better work.
This can be abstracted into the traditional question:
Does knowing more about the details help or hurt?
As a rule, experienced engineers and craftsmen say: it helps. But knowing the details also lets you know when the details don't matter, which is what Visual Studio/IntelliSense provides. (I'm sure there's a pithy proverb that could be said here, but I don't feel up to thinking up a quip.)
Dumb & Lazy.
Interesting question. Sure I find Intellisense in some sense makes the job easier, but it's kind of like money. The more you have, the more you spend, not necessarily on things you need. I learned to program around '62, and somehow I got along without Intellisense for a really long time. What Intellisense does for me now is help me remember lots of classes and members that as little as 4 years ago I never knew I needed.
There's one tendency I've seen in software that never fails. Nature abhors a vacuum. Machines get bigger, so guess what, software gets bigger (but not always better). Machines get faster, so software gets slower. Now people can get help typing long names, so the code gets really verbose. Now people get help remembering lots of classes, so guess what, there are lots more classes to remember. This goes a long way to helping the software get bigger and slower.
I do a lot of performance tuning, and what is the dominant cause of slowdown? It is galloping generality caused by overdesign with too much data structure, too many classes, and too many layers of abstraction. In a word, "bloat". Here is just a small example.
I find Visual Studio's tools conducive towards more experimentation. When you're dealing with the Win32 API in C (for example) you can't really poke around too easily. When you're working with C#, it's a snap to have a little explore around a library and learn what it does without breaking out MSDN or a disassembler for the entire evening.
If you're a naturally curious programmer, Intellisense won't change that. If you're not, Intellisense won't change that either. To paraphrase one of my colleagues "I think it's a waste of time looking through huge books when you can just take an implementation from the web and move on to the next thing".
It's an old argument anyway, pre-Intellisense. Does BASIC rot the mind where writing in x86 doesn't? Is knowing an algorithm inside out relevant when every single programming language you're going to use in your role has a tried and tested library?
I find that those who consider programming a hobby or a skill are inclined to comprehend and investigate. Those who consider it the day job don't. Regardless of any frippery around it, it's more about a programmer's mindset than what is made available.
I'm a computer science student designing a project, and I've started wondering what good examples there are of software, or even hardware, that toe the line between being feature-rich with good, usable features for regular users and being too intimidating for new users. Also, could anyone recommend any good tips/books for designing good-quality applications that are feature-rich but not "bloated"?
"Make everything as simple as possible, but not simpler." - Albert Einstein
"Perfection is reached not when there is nothing left to add, but when there is nothing left to take away." - Antoine de Saint-Exupéry
I am not trying to be flippant but these quotes really are the best advice. Simplicity of design should be your goal. Not that achieving simplicity is easy! On the contrary, it is quite difficult but it is possible.
Try thinking about things a bit differently. Rather than
How many things can I add before this becomes bloated?
try
What are the fewest number of features and elements I can include while still providing a superior experience for my users?
Here's a good set of slides from a presentation on the topic: Rescue Princess 2.0.
The first order of business should just be keeping the application easy to use. Beyond that, all I can say is, beware of writing features for an imaginary user: make sure someone actually needs it before you start coding.
As a direct answer to your question: pretty much any Microsoft product. I'm showing my bias here, but Microsoft has a strong tendency to keep their codebase and pile features on top of features until the original functionality of the app is nearly lost beneath mounds of accreted crud.
Look at MS Word, for example; while you can still just open it up and start typing, god forbid you want to renumber a section of your document while leaving the rest alone. Heaven forbid you want to generate a Table of Contents that includes references to an Appendix. This sort of stuff is de rigueur for word processors, and Word supports it; it just supports it in a way that you cannot get it done without a manual, several cups of coffee, and bandages to stop the bleeding from banging your head on the desk.
Microsoft isn't alone in doing this; this sort of thing tends to happen all the time, with all sorts of products; but they are among the worst offenders I've found.
1: What do your users need and want, and
2: Which features will you have time to implement?
Your question is pretty general. Which features constitute bloat? That kind of depends on whether you're writing an antivirus scanner, an OS or a word processor.
There is no clear barrier between "good" and "too much".
However, it depends on what you want to do.
If you're developing an SDK, I recommend splitting your implementation into several small libraries (rather than just one big library; SDL, for example, has the SDL core, SDL_mixer, SDL_image, etc.).
If you're developing an application, keep a module-based system and a plug-in mechanism.
That way, new features can be added more easily and bloat can be more easily detected.
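As a minimal sketch of the module/plug-in idea (every name here is made up for illustration; only the reflection calls are real .NET APIs), the host can know nothing but an interface and discover implementations at runtime:

    // Illustrative plug-in host: features live in separate assemblies the host loads on demand.
    using System;
    using System.IO;
    using System.Reflection;

    public interface IPlugin
    {
        string Name { get; }
        void Execute();
    }

    public static class PluginLoader
    {
        public static void RunAll(string pluginDirectory)
        {
            foreach (string file in Directory.GetFiles(pluginDirectory, "*.dll"))
            {
                Assembly assembly = Assembly.LoadFrom(file);
                foreach (Type type in assembly.GetTypes())
                {
                    if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract)
                    {
                        var plugin = (IPlugin)Activator.CreateInstance(type);
                        Console.WriteLine("Loading feature: " + plugin.Name);
                        plugin.Execute();
                    }
                }
            }
        }
    }

With something like this, a feature that one audience considers bloat can ship as a separate assembly that other users simply never install.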
You may get to a point where you'll add new features that some will consider "great" and others "bloat". Otherwise, your application may reach a point where some will call it "feature-poor" and others will call it "just enough".
This isn't an exact quote, but the idea was something like this:
A piece of software is perfect not when there is nothing more to add, but when there is nothing more to remove.
In essence, the simpler and more to-the-point a piece of software is, the better.
To get examples of good software design, take a look at programs that are popular today. Google applications would be a nice place to look. Skype perhaps. Heh, even StackOverflow. :)
If you want intimidating, go to the world of CAD. Check out Blender, for example. That's a free 3D-design program. A good tool, I'm told, but the UI has so many buttons/panels/menus/etc. that it makes baby bunnies cry. Unfortunately I cannot say whether this is a good example of a "bad" UI. 3D design is a very complex process, and all those tools are probably in the right place. But it's definitely intimidating. :)
Bad UI design can often be found in proprietary software that comes with proprietary hardware. Unfortunately I cannot give you any examples off the top of my head.
I always tend to design my projects in a way that they're just skeletons which are as extensible as possible. Limiting factors are performance, complexity or Thirdparty-limitations.
This way you could add additional features after finishing the basic structure. A user could also add his needed features.
This probably does not work very well for GUI applications, which should have good usability without much configuration, but I'm sticking with this approach for the libs I develop. (They're used by other coders who like to have a highly modifiable piece of software.)
It's not very hard to develop an application/lib that is bloated with features. But it is hard to develop an app that can be easily extended by other developers/users to match their own needs.
Develop a wide-ranging plug-in system so you can add and take out stuff at any time. Problem solved. If only that were as easy as writing spaghetti code. ;)
For a long time I've been trying different languages to find the feature-set I want and I've not been able to find it. I have languages that fit decently for various projects of mine, but I've come up with an intersection of these languages that will allow me to do 99.9% of my projects in a single language. I want the following:
Built on top of .NET or has a .NET implementation
Has few dependencies on the .NET runtime both at compile-time and runtime (this is important since one of the major use cases is in embedded development where the .NET runtime is completely custom)
Has a compiler that is 100% .NET code with no unmanaged dependencies
Supports arbitrary expression nesting (see below)
Supports custom operator definitions
Supports type inference
Optimizes tail calls
Has explicit immutable/mutable definitions (nicety -- I've come to love this but can live without it)
Supports real macros for strong metaprogramming (absolute must-have)
The primary two languages I've been working with are Boo and Nemerle, but I've also played around with F#.
Main complaints against Nemerle: The compiler has horrid error reporting, the implementation is buggy as hell (compiler and libraries), the macros can only be applied inside a function or as attributes, and it's fairly heavy dependency-wise (although not enough that it's a dealbreaker).
Main complaints against Boo: No arbitrary expression nesting (dealbreaker), macros are difficult to write, no custom operator definition (potential dealbreaker).
Main complaints against F#: Ugly syntax, hard to understand metaprogramming, non-free license (epic dealbreaker).
So the more I think about it, the more I think about developing my own language.
Pros:
Get the exact syntax I want
Get a turnaround time that will be a good deal faster; difficult to quantify, but I wouldn't be surprised to see 1.5x developer productivity, especially due to the test infrastructures this can enable for certain projects
I can easily add custom functionality to the compiler to play nicely with my runtime
I get something that is designed and works exactly the way I want -- as much as this sounds like NIH, this will make my life easier
Cons:
Unless it can get popularity, I will be stuck with the burden of maintenance. I know I can at least get the Nemerle people over, since I think everyone wants something more professional, but it takes a village.
Due to the first con, I'm wary of using it in a professional setting. That said, I'm already using Nemerle and using my own custom modified compiler since they're not maintaining it well at all.
If it doesn't gain popularity, finding developers will be much more difficult, to an extent that Paul Graham might not even condone.
So based on all of this, what's the general consensus -- is this a good idea or a bad idea? And perhaps more helpfully, have I missed any big pros or cons?
Edit: Forgot to add the nesting example -- here's a case in Nemerle:
def foo =
    if (bar == 5)
        match (baz) { | "foo" => 1 | _ => 0 }
    else
        bar;
Edit #2: Figured it wouldn't hurt to give an example of the type of code that would be converted to this language if it's to exist (S. Lott's answer alone may be enough to scare me away from doing it). The code makes heavy use of custom syntax (opcode, :=, quoteblock, etc.), expression nesting, and so on. You can check out a good example here.
Sadly, there are no metrics or stories around failed languages, just successful ones. Clearly, the failures outnumber the successes.
What do I base this on? Two common experiences.
Once or twice a year, I have to endure a pitch for a product/language/tool/framework that will Absolutely Change Everything. My answer has been constant for the last 20 or so years. Show me someone who needs support and my company will support them. And that's that. Never hear from them again. Let's say I've heard 25 of these.
Once or twice each year, I have to work with a customer who has orphaned technology. At some point in the past, some clever programmer built a tool/framework/library/package that was used internally for several projects. Then that programmer left. No one else can figure that darn thing out, and they want us to replace/rewrite it. Sadly, we can't figure it out either, and our proposal is to rewrite from scratch. And they complain that their genius built the set of apps in a period of weeks, so it can't take us months to rewrite them in Java/Python/VB/C#. Let's say I've written 25 or so of these kinds of proposals.
That's just me, one consultant.
Indeed, one particularly sad situation was a company whose entire IT software portfolio was written by one clever guy with a private language and tools. He hadn't left, but he'd realized that his language and toolset had fallen way behind the times -- the state of the art had moved on, and he hadn't.
And the move was -- of course -- in an unexpected direction. His language and tools were okay, but the world had started to adopt relational databases, and he had absolutely no way to upgrade his junk to move away from flat files. It was something he had not foreseen. Indeed, it was something he could not possibly foresee. [You won't fall into this trap, will you?]
So, we talked. He rewrote a lot of the applications in Plain-Old VAX Fortran (yes, this is a long time ago.) And he rewrote it to use plain old relational SQL stuff (Ingres, at the time.)
After a year of coding, they were having performance problems. They called me back to review all the great stuff they'd done in replacing the home-built language. Sadly, they'd done the worst possible relational database design. Worst possible. They'd taken their file copies, merges, sorts, and what-not, and implemented each low-level file system operation using SQL, duplicating database rows left, right and center.
He was so mired in his private vision of the perfect language, that he couldn't adapt to a relatively common, pervasive new technology.
I say go for it.
It would be an awesome experience regardless of whether it makes it to production or not.
If you make it compile down to IL, then you do not have to worry about not being able to reuse your compiled assemblies with C#.
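To make that concrete, here is a minimal sketch of what "compiling down to IL" means in practice, using System.Reflection.Emit to build a tiny method at runtime; a real compiler would emit whole assemblies rather than a single dynamic method, but the target is the same IL that C# produces, which is why the output stays usable from C#.

    // Illustrative sketch: emit the IL for int Add(int a, int b) => a + b at runtime.
    using System;
    using System.Reflection.Emit;

    class EmitDemo
    {
        static void Main()
        {
            var add = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
            ILGenerator il = add.GetILGenerator();
            il.Emit(OpCodes.Ldarg_0);  // push first argument
            il.Emit(OpCodes.Ldarg_1);  // push second argument
            il.Emit(OpCodes.Add);      // add them
            il.Emit(OpCodes.Ret);      // return the result

            var adder = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
            Console.WriteLine(adder(2, 3));  // prints 5
        }
    }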
If you believe that you have valid complaints about the languages you listed above, it is likely that many will think like you. Of course, for every 1000 interested people there might be 1 willing to help you maintain it - but that is always the risk.
But here are a few things to be cautioned about:
Get your language specification SET IN STONE before development. Make sure any and all language features are figured out beforehand - even things that you may only want in the future. In my opinion, C# is slowly falling into the "oh-just-one-more-language-extension" trap that will lead to its eventual doom.
Be sure to make it optimized. I don't know what you already know, but if you don't know, then learn ;) Nobody will want a language that has nice syntax but runs as slow as IE's JavaScript implementation.
Good luck :D
When I first started my career in the early 90s, there seemed to be this craze of everyone developing their own in-house languages. My first 3 jobs were with companies that had done this. One company had even developed their own operating system!
From experience, I'd say this is a bad idea for the following reasons:
1) You will spend time debugging the language itself in addition to the code base on top of it
2) Any developers you hire will need to go through the learning curve of the language
3) It will be hard to attract and keep developers since working in a proprietary language is a dead-end for someone's career
The main reason I left those three jobs was because they had proprietary languages and you'll notice that not many companies take this route any more :).
An additional argument I'd make is that most languages have entire teams whose full time job it is to develop the language. Maybe you'd be an exception, but I'd be very surprised if you'd be able to match that level of development by only working on the language part-time.
"Main complaints against Nemerle: The compiler has horrid error reporting, the implementation is buggy as hell (compiler and libraries), the macros can only be applied inside a function or as attributes, and it's fairly heavy dependency-wise (although not enough that it's a dealbreaker)."
I see your post was written more than two years ago.
I advise you to try the Nemerle language today.
The compiler is stable. There are no blocker bugs for today.
The VS integration has a lot of improvements, and there is also SharpDevelop integration.
If you give it a chance, you won't be disappointed.
NEVER EVER develop your own language.
Developing your own language is a fool's trap, and worse, it will limit you to what your imagination can provide, as well as demanding that you work out both your development environment and the actual program you're writing.
The cases in which this doesn't apply are pretty much if you're Larry Wall, the AWK guys, or part of a substantial group of people dedicated to testing the boundaries of programming. If you're in any of those categories, you don't need my advice, but I strongly doubt that you're targeting a niche where there is no suitable programming language for the task AND the characteristics of the people doing the task.
If you are as clever as you seem to be (a likely possibility), my advice is to go ahead and do the design of the language first, iterate over it a couple of times, ask some smart fellows you trust in programming-language communities about the concrete design you came up with, and then make the decision.
You might realize in the process of creating the design that just a quick hack on Nemerle would give it all you need, for example. Many things can happen just when thinking hard about a problem, and the final solution might not be what you actually had in mind when beginning the project.
Worst-case scenario, you're stuck with actually implementing the design, but by then you will have had it proofread and matured, and you'll know with a high degree of certainty that it was a good path to take.
A related piece of advice: start small; just define the features you absolutely need, and then build on them to get the rest.
Writing your own language is not an easy project, especially one to be used in any kind of "professional setting".
It is a huge amount of work, and I doubt you could write your own language and still write any big projects that use it - you will spend so long adding features that you need, fixing bugs, and doing general language-design work.
I would strongly recommend choosing a language that is closest to what you want and extending it to do what you need. It'll never be exactly what you want, but compared to the time you'd spend writing your own language, I would say that's a small compromise.
Scala has a .NET compiler. I don't know the status of it, though. It's kind of a second-class citizen in the Scala world (which is more focused on the JVM). But it might be a good tradeoff to adopt the .NET compiler instead of creating a new language from scratch.
Scala is kind of weak in the metaprogramming department at the moment. It's possible that the need for metaprogramming is somewhat reduced by other language features. In any case, I don't think anyone would be sad if you were to implement metaprogramming features for it. Also, there is a compiler plug-in infrastructure on the way.
I think most languages will never fit all of the bill.
You might want to combine your 2 favourite languages (in my case C# and Scheme) and use them together.
From a professional point of view, this is probably not a good idea, though.
It would be interesting to hear some of the things you feel you can't do in existing languages. What kind of projects are you working on that can't be done in C#?
I'm just curious!
I've had several false starts in the past with teaching myself how to program. I've worked through several books (mostly C and Python), and end up just learning the syntax without feeling as though I could sit down and actually write a program for myself. When I try to look through the source trees of a project on Codeplex or Sourceforge, I never seem to know where to start reading the code -- the dependencies seem to go in all directions.
I feel as though I'm not learning programming the way it's done "on the street," so I figured I'd take a different approach and ask how a newbie should learn to code. If you had to learn programming all over again, what are the things you wouldn't do? What did you spend time on that you now know wasted weeks or months?
Where I see beginners wasting weeks or months is at the keyboard. The computer is very responsive and will cheerfully chew up hours of your time in the edit-compile-run cycle. If you are learning, you will save many hours if:
You plan out your design on paper before you approach a computer. It doesn't matter what design method you pick or if you have never heard of a design method. Just write down a plan while your brain is fully engaged and not distracted by the computer.
When code will not compile or will not produce the right answer, if you can't fix it in five minutes, walk away from the computer. Go think about what's happening. Print out your code and scribble on it until you believe it's right.
These are just devices for helping to implement the simple but difficult old advice to think before you code.
When I was learning, I solved countless problems on the 15-minute walk from the computing center to my home. Sadly, with modern PCs we don't get that 15 minutes :-) If you can learn to take it anyway, you will become a better programmer, faster.
I certainly wouldn't start by looking at "real" software projects. Like you say, it's too hard to know where to start. That's largely because large projects are more about their large-scale design than about the individual algorithms or about program flow; for one thing, you're probably looking at a complex GUI application with multi-threading, etc. There isn't really anywhere to "start" looking at the code.
The best way to learn programming is to have a problem you want (need) to solve, and then to go about solving it. But most importantly, WRITE CODE. When you read programming books, do ALL the exercises. Make sure you did them right. There's no substitute for writing code. No substitute for screwing up and then fixing it.
Stack Over F.. wait no, heh.
The biggest time-sinks for me are generally in respect to "finding the best answer." I often find that I will run into a problem that I know how to solve but feel that there is a better solution, and go on the hunt for it. It is only hours/days later, when I have 7 instances of Firefox, each containing at least 5 tabs, sprawled out across 46" of monitor space, that I come to my senses and realize that I've been caught in the black hole that is the pursuit of endless knowledge.
My advice to you, and to myself for that matter, is to become comfortable with the notion of refactoring. Essentially what this means (in case you are not familiar with the term) is that you come up with a solution for a problem and go with it, even if there is quite likely a better way of doing it. Once you have finished the problem, or even the program, you can then revisit your methodology, study it, and figure out where you can make changes to improve it.
This concept has always been hard for me to follow. In college I preferred to write a paper once, print it, and turn it in. Writing code can be thought of very similarly to writing a paper. Simply putting pen to pad and pushing out what's on your mind may work - but when you look back over it with a fresh pair of eyes you will, without question, see something you wish you had done differently.
I just noticed you talked about reading through source trees of other people's projects. Reading other people's code is a wonderful idea, but you must read more selectively. A lot of open-source code is hard to read and not stuff you should emulate anyway. So avoid reading any code that hasn't been recommended by a programmer you respect.
Hint: Jon Bentley, Brian Kernighan, Rob Pike, and P. J. Plauger, who are all programmers I respect, have published a lot of code worth reading. In books.
The only way to learn how to program is to write more code. Reading books is great, but writing / fixing code is the best way to learn. You can't learn anything without doing.
You might also want to look at this book, How to Design Programs, for more of a perspective on design than details of syntax.
The only thing that I did that wasted weeks or months was worry about whether or not my designs were the best way to implement a particular solution. I know now that this is known as "premature optimization," and we all suffer from it to one degree or another. The right way to learn programming is to solve a problem, measure your solution to make sure it performs well enough, then move on to the next problem. After some time you'll have a pile of problems you've solved, but more importantly, you'll know a programming language.
There is excellent advice here, in other posts. Here are my thoughts:
1) Learn to type, the reasons are explained in this article by Steve Yegge. It will help more than you can imagine.
2) Reading code is generally considered a hard task. So, it is better to get an open source project, compile it, and start changing it and learn that way, rather than reading and trying to understand.
I can understand the situation you're in. Reading through books, even many of them, will not make you a programmer. What you need to do is START PROGRAMMING.
Actually, programming is a lot like swimming, in my opinion. Even if you know only a little syntax and an even smaller number of coding techniques, start coding anyway. Make a small application: a home inventory, an expense catalog, a datesheet, a CD cataloger, anything you fancy.
The idea is to get into the nitty-gritty of it. Once you start programming you'll run into real-world problems, and your problem-solving skills will develop as you combat them. That's how you become a better programmer every day.
So get into the thick of it, and swim right through... That's how you'll make it.
Good luck
I think this question will have wildly different answers for different people.
For myself, I tried C++ at one point (I was about ten and had already been programming for a while), with a click-and-drag UI builder. I think this was a mistake, and I should have gone straight to C and pointers and such. Because I'm just that kind of person.
In your case, it sounds like you want to be led down the right path by someone and feel a bit timid about jumping in and doing something by yourself. (You've read several books and now you're asking what not to do.)
I'll tell you how I learned: by doing plenty of fun, relatively short projects, steadily growing in difficulty. I began with QBasic (which I think is still a great learning tool) and it was there where I developed most of my programming skills. They have of course been expanded and refined since that time but I was already capable of good design back in those days.
The sorts of projects you could take on depend on your interests; if you're mathematically inclined you might want to try a prime number generator or projecting 3D points onto the screen; if you're interested in game design then you could try cloning pong (easy) or minesweeper (harder); or if you're more of a hacker you might want to make a simple chat program or file encryption software.
Work on these projects on your own, and don't worry about whether you're doing things the "right" way. As long as you get it to work, you've learned many things. Some time after you've completed a project you may want to revisit it and try to do it better, or just see how other people have done that sort of thing.
Given the way you seem to want to be led along, perhaps you should find yourself a mentor.
Do not learn how to use pointers and how to manually manage memory. You mentioned C, and I spent plenty of time trying to fix bugs that were caused by mixing *x and &x. This is evil...
Find some problem to solve, write or draw a sketch of an algorithm solving the problem, then try to write it. Either use Python (which is much more friendly for beginners) or use C with statically allocated memory only. And use books/tutorials. They offer multiple exercises with solutions, so you can compare yours with them and see other approaches.
Once you feel that you can actually write something simple, see some book/tutorial on object-oriented design. It's not the best the world has to offer, but it might turn out to be intuitive. If not, check out functional programming (languages like Lisp, Scheme, or Haskell) or logic programming (like Prolog). Maybe those will suit you better.
Also - find some mate. A person you can talk to about coding, code maintenance, and design. Such a person is worth even more than a book.
To all C fans: The C language is great, really. It allows memory-usage optimization to an extent impossible in high-level languages such as Python or Ruby. The compiled code is also very fast, and it is the only choice for an RTOS or a modern 3D game engine. But it is not a good entry point for a beginner - that's what I believe.
Oh, and good luck to you! And don't be ashamed to ask! If you don't ask, the answer is much harder to find.
Assuming you have decent math skills, try http://projecteuler.net/ - it presents a series of problems of increasing difficulty that should be solvable by writing short programs. This should give you experience in solving specific problems without getting lost in the details of open-source projects.
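As a taste of the scale involved, here is a hypothetical example in the spirit of the first problem (sum the multiples of 3 or 5 below 1000), written in C# to stay consistent with the other snippets in this thread:

    // A whole Project Euler-style solution can fit in a dozen lines.
    using System;

    class Euler1
    {
        static void Main()
        {
            int sum = 0;
            for (int i = 1; i < 1000; i++)
            {
                if (i % 3 == 0 || i % 5 == 0)
                {
                    sum += i;
                }
            }
            Console.WriteLine(sum);  // 233168
        }
    }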
After basic language syntax, you need to learn design. Which is hard. This book may help.
I think you should stop thinking you've wasted time so far -- instead, I think your education is just incomplete, and you've taken a step you're not really ready for. It sounds like the books you've read are useful; you're learning the intricacies of the language. It sounds like you're just not accustomed to the tools you'd then use to package that code together so it runs.
Some books that focus on topics like language syntax, design patterns, algorithms, and data structures will never mention the tools you need to actually apply that information. These books are great, but if they're all you've touched, I think that would explain your situation.
What development environment are you using? If you're developing for Windows, you really should be proficient with creating projects, adding code, and running and debugging in Visual Studio. You can download Visual Studio Express for free from Microsoft.
I recommend looking for tutorial like books that actually step you through the UI of development environment you are using. Look for actual screenshots with dropdown menus. Look at what the tutorials walk you through, and if its something you don't know how to do consider buying that book. Preferably it will have code you can copy'n'paste in, not code you write yourself.
I personally don't like these books, as I can anticipate how to do new things in VS based on how I'd do other things. But if your training is incomplete from a tools-usage perspective, this could move you in the right direction.
It is probably harder to find these types of tutorial books for Python or C development. There is an overabundance of them for .Net development though.
As someone who has only been working as a programmer for 6 months, I might not be the best person to help you get going, but since it wasn't that long ago that I knew next to nothing, it's quite fresh in my mind.
When I started my current job programming wasn't going to be part of my job description but when the opportunity came up to do some programming on the side, I couldn't pass it up.
I spent about a month doing tutorials on About.com's Delphi section. As much as people diss About.com, Zarko Gajic's tutorials were simple to understand and easy to follow. Once I had a basic knack for the language and the IDE, I jumped straight into a project exporting accounting data for a program called "Adept". It took me a while, but I got there...
The biggest help for me was taking on a personal project. I developed an IRC bot in Java for a crappy 2D game called Soldat. I learnt a lot by planning out and coding my own project.
Now I'm pretty comfortable with Delphi Pascal, SQL, C# and Java. I think, once you get the hang of one OOP language, you can learn the syntax of another language, and it gets a lot easier to catch on.
Perhaps start with a small existing project, and find something within it that handles some core part of what it does - then, with a debugger, step through it and follow what it's doing from the point where you ask it to do that thing for you.
This helps you in a number of ways. You start to better grasp all of the various things that are touched by the code as it attempts to complete its request. Also, you learn invaluable debugging techniques which it seems like far too many developers lack - while you can often eventually discover what is wrong either with repeated printf() (or equivalent) calls, if you can debug you can solve issues an order of magnitude faster.
I have found that conceptually, a great mental model for understanding programming in the abstract is a pattern of data flow. When a user manipulates data, how is it altered by a program for digestion and storage? How is it transformed to re-present to the user in a form that makes sense to them? Fundamentally code is about transformation of data, and all code can be broken down into constructs of various sizes whose purpose is to alter data in one way or another, bugs forming around the mismatch between what the programmer was expecting from the data, how high level libraries the coder is using treat the data, and how the data actually arrives. Following code with a debugger helps you fully understand this transformation in action by observing changes as they occur.
Standard answer is to make something; picking an easy language to do it in is good, but not essential. It's more the working out stuff in your own head, fixing it because it won't work, that really teaches you. For me, this always happens when I try my eternal dream projects (games) which I never finish but always learn from.
I think the thing I would avoid is learning a language in isolated snippets that don't really hang together but just teach various facets of a particular language. As others have said, the really hard and important thing is to learn design. I think the best way to do this is through a tutorial that walks you through creating an actual application, teaching design along the way. That way you can learn why certain decisions are made and learn how to accomplish what's needed to implement the design choices.
For example, I found Agile Web Development with Rails to be a really easy way to learn Ruby on Rails, much better than simply reading a Ruby manual or even poking my way around scattered web tutorials.
Another thing that I would avoid is developing code in isolation, that is, not having people look at it as I go along. Getting feedback from a mentor will help keep you on the right track with respect to the choices you are making and the correct use of language idioms.
Find a problem in your life or something you do that you just feel could be more efficient and write a small solution to it. It might just be a single script but you will gain much more confidence in your abilities when you start to see useful results of your work. You will also be more motivated to finish it as you are interested in using the solution. Start simple and small and then gradually move up to bigger projects.
And as you're working on a small project, focus on building everything with quality. I think this is lost on some programmers who feel that their software is more impressive if it contains a ton of features, but usually those features aren't well done or usable. If you focus on building quality solutions to real problems, you'll be a fantastic programmer.
Good luck!
Work on projects/problems that you already know how to solve partially
You should read Mike Clark's article: How I Learned Ruby. Essentially, he used the test framework for Ruby to exercise different elements of the language.
I used this technique to learn Python, and it was very, very helpful. Not only did I learn the language, but I was very proficient in the test framework for Python at the end of the exercise. Once you have the basics, you can start reading code and then work on building some larger project.
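In the same spirit, here is a sketch of what such "learning tests" might look like in C# (to stay consistent with the other snippets in this thread, and assuming an NUnit-style test framework is available; the facts being tested are just examples): each test pins down one thing you have learned about the language or its libraries.

    // "Learning tests": each assertion records one fact you verified for yourself.
    using System;
    using NUnit.Framework;

    [TestFixture]
    public class StringLearningTests
    {
        [Test]
        public void Substring_TakesStartIndexAndLength()
        {
            Assert.AreEqual("ell", "hello".Substring(1, 3));
        }

        [Test]
        public void Split_RemovesTheSeparator()
        {
            string[] parts = "a,b,c".Split(',');
            Assert.AreEqual(3, parts.Length);
            Assert.AreEqual("b", parts[1]);
        }

        [Test]
        public void IntegerDivision_TruncatesTowardZero()
        {
            Assert.AreEqual(2, 7 / 3);
            Assert.AreEqual(-2, -7 / 3);
        }
    }

When a test surprises you, you've found a gap in your mental model, which is exactly the point of the exercise.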