I currently don't know either language. The design of a piece of software I want to write is close to complete.
What intrigues me about each:
Ruby: Enjoyable. Follows thought process. Made for humans.
Go: Good performance. Fast compile times.
I don't know about Ruby's performance. If it's a lot slower than Go, I'll go with the latter (talking about typical speed here).
I'll learn both eventually, but right now, this will determine which one first.
Update: It's a very basic image-editing program. Both actual and, in particular, perceived speed should be high. Startup time is especially important.
Sadly, neither language is appropriate for a desktop image editing program.
You haven't told us which desktop you have in mind, so I'll assume it's either Windows or Mac.
Ruby is not appropriate because it fails 2 of your requirements:
it has a terrible startup time because at startup it has to initialize a rather complicated VM, which involves loading quite a big part of its standard library
it's very slow (compared to C/Java/Go) doing the kind of computations that image processing entails
Go is statically linked and is compiled to machine code, so its startup time is excellent and the speed is close to C (i.e. it's the fastest language you can hope to choose after C/C++).
However, Go has no support whatsoever for writing Mac desktop apps (i.e. it has no bridge to Objective-C/Cocoa runtime) and the support for writing Windows desktop apps is extremely poor.
If you're doing Windows, the only languages that give you fast startup times are C/C++/Delphi. C# might have acceptable startup time, and it's fast enough for the task (the very popular Paint.NET is written in C#, and you can find an old version of its code which is BSD-licensed and re-use a lot of it).
For Mac, I would recommend Objective-C - it's the native language of the platform, the best documented, and it comes with the best free dev tools (Xcode). You can use https://github.com/philippec/Pixen as a starting point.
You really need to give us some idea of what you consider to be good and bad performance, because it's very subjective.
For example, people are usually willing to trade a certain amount of technical or perceived speed for a system that is easier to work with or develop. Plus, it also matters what you are trying to do. Each language has its own strengths and weaknesses. Ruby may be faster at some things than Go. Then again, if you really need speed, perhaps you should be looking at a language that is closer to the metal, such as C.
Sometimes though, requests for speed from users are subjective too. I once had a system that the users thought was taking too long to do a specific task. There was no way technically to speed it up, so I animated the "Processing ..." window. Because the users could now see something "happening" on the screen, they thought it was going faster. On a stop watch, it actually took a couple of seconds longer.
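The trick generalizes beyond that one system. Below is a minimal sketch in Python of the same idea - keep something visibly moving on screen while the real work runs in a background thread; the slow_task function and all timings are made-up stand-ins, not code from the system described above.

    import itertools
    import sys
    import threading
    import time

    def slow_task():
        time.sleep(5)  # stand-in for the long-running operation

    worker = threading.Thread(target=slow_task)
    worker.start()

    # Animate a simple text spinner until the worker finishes, so the user
    # sees something "happening" even though the work takes just as long.
    for frame in itertools.cycle("|/-\\"):
        if not worker.is_alive():
            break
        sys.stdout.write("\rProcessing ... " + frame)
        sys.stdout.flush()
        time.sleep(0.1)

    print("\rProcessing ... done")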
I think those languages are among the worst you can choose for a performance-critical application. I don't know much about Go, but Ruby is similar to Python (even slower), and Python is slow as hell. From what I've been reading, Go is much faster than Ruby, but still something like two or three times slower compared to other programming languages... It depends on what you are trying to do, of course; e.g. I wouldn't choose either of them for real-time physics or something like that.
http://shootout.alioth.debian.org/u32/performance.php?test=nbody
Why is go language so slow?
http://attractivechaos.github.com/plb/
I've been working with Python for a couple of years and it's really slow, and I'm sure you will hate it. Ruby is very similar to Python and even slower. Go is too new for me to know much about it, so I can't tell.
There are a lot of homework questions here on SO.
I would guess that 90%+ can be solved by stepping through the code in a debugger, and observing program/variable state.
I was never taught to use a debugger. I simply printed out and read the GDB manual and stepped through its examples. When I used Visual Studio for the first time, I remember thinking: wow, how much simpler can this be? Click to set a breakpoint, mouse over a variable for its value, press a key to step, the Immediate window, Debug.Print, etc...
At any rate, are students "taught" to use a debugger? If not, why not? (Perhaps a better question is, why can't they learn to use a debugger themselves... maybe they need to be told that there is such a tool that can help them...)
How long does it take to learn to use a debugger?
I don't think the problem is teaching. Using a modern graphical debugger is not rocket science (at least not for most user-mode programs running on a single computer). The problem is with the attitudes of some people. In order to use a debugger effectively, you should:
Admit it's your fault and select isn't broken.
Have the perseverance to spend a couple nights debugging, without forgetting the previous point.
There's no specific algorithm to follow. You have to make educated guesses and reason effectively from what you see.
Not many non-programmers have these attitudes. At college, I have seen many friends who give up after a relatively short period of time and bring me some code and tell me the computer is doing something wrong. I usually tell them I trust their computer more than them (and this hurts some feelings, but that's the way it is).
In my high school and university, most of the people in the classes didn't really care about programming at all.
If by students you mean Computer Science students, I think the answer is fairly obvious. The subject matter for courses is generally theory, with the programming language / framework / library there as an aid. The professor can't go very far in depth on a particular tool, since it would take away from time he is teaching networking or systems or whatever. Maybe if there were a course called "Real World Programming" or something like that, they'd cover debuggers, but in general I don't see too much wrong with expecting students to read the language / tool documentation in order to accomplish the coursework.
Debuggers were introduced in my second year Intro to C course, if I recall correctly. Of course the problem most students were struggling with at that point was getting their work to compile, which a debugger will not help with. And once their ten line command line program compiles and then crashes, well, they already have some printfs right there. Fighting to master GDB is overkill.
In my experience, it's fairly rare to actually deal with a code base large enough to make more than a cursory familiarization with a debugger worth the time investment in most Comp. Sci curriculums. The programs are small and the problems you face are more along the lines of figuring out the time-space complexity of your algorithm.
Debuggers become much more valuable on real world projects, where you have a lot of code written by different people at different times to trace through to figure out what keeps frotzing foo before the call to bar().
This is a good question to ask to the faculty at your school.
At my university, they gave a very brief example of debugging, then pointed us to the "help" files and the books.
Perhaps they don't teach it because there is sooo much stuff to cover and so little time for the lecturers. The professors aren't going to hold everybody's hand.
Not entirely related, but people need to use debuggers not just for debugging but to understand working code.
I'll put in a cautionary note on the other side. I learned to program with Visual Basic and Visual C (mid '80s), and the debuggers were built in and easy to use. Too easy, in fact... I generally didn't think about how to solve a problem; I just ran it in the debugger and adjusted the behavior. Oh, that variable is one too high... I must have to subtract one here!
It wasn't until I switched to Linux, with the not-quite-as-easy gcc/gdb combo, that I began to appreciate design and thinking about your code first.
I'll admit, I probably go too far the other way now. I use a debugger to analyze stack traces and that's about it. There should be a middle ground between analyzing the problem and stepping through it in a debugger. Certainly people should be shown all the tools available to them.
I was taught to use a debugger in college. Not much, and late (it should be almost the second thing taught), but they taught me.
Anyway, it's important to teach how to DEBUG, not only how to "use a debugger". There are situations where you can't debug with gdb (e.g. try to debug a program running 10 concurrent threads) and you need a different approach, like the old-fashioned printf.
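For what it's worth, here is a minimal sketch of that old-fashioned approach in Python, using the standard logging module so output from ten concurrent threads stays readable; the worker function and its timings are purely illustrative.

    import logging
    import threading
    import time

    # Timestamp and thread name on every line make interleaved output readable.
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s %(threadName)s %(message)s",
    )

    def worker(n):
        # "printf" debugging: record what each thread is doing and when.
        logging.debug("starting with n=%d", n)
        time.sleep(0.01 * n)  # stand-in for real work
        logging.debug("finished with n=%d", n)

    threads = [threading.Thread(target=worker, args=(i,), name="worker-%d" % i)
               for i in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()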
I can certainly agree with you that one usually learns and makes use of debugging techniques much later than the first time one could have used them.
From a practicality standpoint, most likely (due to policy or technical restrictions) you cannot use a debugger on a production application. Not using the debugger as too much of a crutch promotes adding the proper amount of logging to your application.
Because there is no textbook on debugging, period.
In fact, it is very hard to create a teaching situation where students have an incentive to use a debugger. Typical assignments are too simple to really require one. Greg Wilson raised that topic at last year's SUITE workshop, and the consensus was that it is very hard to get students to use a debugger. You can tell them about debugging, but creating a situation where they will actually feel the pain of having to resort to the debugger is hard.
Maybe a lecture on game cracking could motivate students to step through the game with a debugger? At least, that was why I taught myself to use a debugger as a 12-year-old :)
I found that there is a lot of negative attitude towards debuggers among academics and seasoned systems programmers. I have come up against one quite talented programmer who claimed that "debuggers don't work, I just use log files." Fair enough - for multi-threaded server apps you must have logging - but there's no denying a debugger is useful for the 99% of code that is not multi-threaded.
In answer to your question: yes, debuggers should be covered in the programming syllabus, but as one of the tools for debugging a program. Tracing and logging are important as well.
Having recruited at four universities (Rensselaer, Purdue, Ohio State, University of Washington), I've found that students who write code for money for incubators associated with their university tend to learn the art of debugging really well, because the incubation companies want people who can solve problems in fewer hours, and they invest some time in teaching them good debugging techniques. Depending on the sophistication of the particular incubator company, they might also invest in mentoring on patterns and performance to help the student be more productive for them, but often debugging is the first investment.
Left to the traditional CS classes, students don't seem to walk away with the same set of skills that help them narrow down a problem, manipulate the data while the program/service/page/site/component is running, and really understand the implications of what they've written vs. what they needed to write to make it right.
I went to Rensselaer and I "learned on my own" because I was paid a flat rate for some projects and wanted to minimize my own time spent programming - and it was further reinforced by working as an intern at Microsoft in 1994, where I got to see how useful an integrated dev environment really was.
Venturing a hypothesis based on my experience as a TA and aspiring CS prof:
It would actually confuse the kids who have little to no programming experience more than it would help.
Now first of all: I completely agree that teaching how to use a debugger would be a great thing, but I think the barrier to doing so stems from the greater, systemic problem that software engineering and computer science are not separate majors. Most CS programs require 2-4 classes where learning to code is the focus. After those, coding ability is required but is not the topic of the class.
To my main point: it's very hard to teach something under the guise of "you don't get this now, but do it because it'll be useful later." You can try, but I don't think it really works. This is an extension of the idea that people only really learn by doing. Going through the motions without understanding why is not the same as doing.
I think kids learning to code for the first time aren't going to understand why using a debugger is more effective than inserting print lines. Think about a small to medium sized script you code: would you use the debugger barring odd behavior or some bug you couldn't work out quickly? I wouldn't, seems like it would just slow me down. But, when it comes to maintaining the huge project I work on every day, the debugger is invaluable beyond a doubt. By the time students get to the portion of the curriculum that requires big projects, they're not in a class that focuses on general coding anymore.
And all of this brings me to my awesome idea that I think every CS prof should use when teaching how to code: instead of exclusively asking for projects from the kids, every now and then give them a big piece of complex code and ask them to fix the bugs. This would teach them how to use a debugger.
In high school we were taught to debug by writing stuff out to the console.
In college, we were taught a mix of that plus using a debugger.
The tools have only gotten easier to use, so I am really not sure why it is not taught.
I was taught in my first CS class how to use a debugger. It didn't do me much good to be setting breakpoints and stepping through my code when most of what I wrote was "Hello World!". I pretty much ignored the debugger from that point on until I learned to use GDB in a much more advanced course while working on a "binary bomb" homework assignment.
Since I've been out of school and working I've spent a lot more time using a debugger and learning how useful it can be. I would say that learning to use a debugger includes three things - a need for one, being taught/learning how to use one, and experience knowing how to use one to your advantage.
Also, spending some time learning "echo debugging" can be worthwhile for those situations when a debugger isn't available/necessary. That's just my $0.02.
There are more than a few questions here, to my mind: some asked outright, and a few I'd infer.
I was taught in BASIC and Pascal initially, usually with an interpreter that made it easier to run the program till something blew up. We didn't have breakpoints or many of the fancy things there are now for tracing through code, though this would have been from 1983-1994 using a Commodore 64, Watcom BASIC, and Pascal on a Mac.
Even in my later university years, we didn't have a debugger. If our code didn't work, we used print statements or did manual tracing; time-wise this would have been 1995-1997.
One caveat with a debugger is that, for something like Visual Studio, do you have any idea how long it could take to go through every feature it has for debugging? That could take years in some cases, I think. This is without getting into all the build options and other things it can do that one might use eventually. Another point is that for all the good things a debugger gives, there is something to be said for how complex things can get, e.g. at a breakpoint in VS there are the call stack, local variables, watch windows, memory, disassembly and other things that one could want to examine while execution is halted.
The basics of using a debugger could be learned in a week or so, I think. However, to get to the point of mastering what a debugger does, how much goes on when code is executing as well as where it is executing as there are multiple places where things can run these days like GPUs to go along with the CPU, would take a lot longer and I'd question how many people have that kind of drive, even in school.
Your question is kind of similar to, "Why aren't students taught software testing?" I'm sure they do in some places, but typically universities/colleges stick to teaching the "interesting" theoretical computer science stuff and tend not to teach the practical tools. It's like how, if you're taking English in school, they teach you how to write, not how to use MS Word (yeah, I'm sure there are some Word courses, but you get my point).
I wasn't taught to use a debugger in my undergraduate degree, because you cannot use a debugger on a deck of punch cards. Even print statements are a luxury if you have a 15 minute turnaround on "jobs", etcetera.
I'm not saying that people should not be taught to use debuggers. Just that it is also important to learn to debug without this aid, because:
it will help you understand your code better if you don't have to rely on a debugger, and
there are situations where a sophisticated debugger won't be available.
On the latter point, I can also remember debugging a boot prom on an embedded device using a (rather expensive) logic analyzer to capture what was happening on the address / data lines.
The same reason students aren't taught version control, or unit testing, or shell scripting, or text editing, or documentation writing, or even (beyond intro courses) programming languages. The class is about computer science, usually a single concept or family of concepts, not programming. You're expected to learn what you need.
This isn't unique to computer science. My chemistry classes (I also have a chemistry degree) didn't teach me how to use any chemistry lab equipment, either. You learned that by hanging around in the lab and watching other students and asking the grizzled old profs who hung out there.
I realize that the question is likely to get a lot of "it depends", but I am curious anyway. When you hire somebody new (but experienced) to the team, and they don't have expertise in the technology you are using but know something similar, how much time do you budget for them to "get online"?
I am talking about something fairly substantial, like a language, or a framework / product that has a lot of ways of doing things. Obviously, many libraries take very little time to start using.
In my own experience (10 years of experience, including a fair amount of consulting, so learning new technologies is par for the course), it takes me about three to six months of experience to become proficient at a new technology, and about a year to feel like I am approaching expert level where I know all the basics and medium-difficulty issues, along with a few areas very well.
What do you do in your projects? How do you budget the time to account for learning?
It doesn't only depend on the individual involved -- it crucially depends on the specific technology as well as the individual's background; certain technologies, especially languages, are just harder and slower to get into. I've seen world-class Java gurus with zero previous exposure to C++ take many months, say on the order of six or so, to be fully productive in C++; vice versa (a world-class C++ guru with zero previous exposure to Java), I've seen it take about 2-3 months; again, for extremely experienced and skilled programmers with no previous exposure to dynamic languages, being fully productive in Python can be expected to take 3-4 weeks. In each case I'm talking about 100% full-time involvement in the relevant technology, by a programmer in the world's top one percent in terms of skill and experience, within a team that has several other programmers of that caliber who are also gurus in the specific language in use.
Factors that can shorten the time are previous exposure to "similar" languages/technologies, e.g. a solid background in C makes C++ slightly faster to learn, solid background in C# helps with Java, solid background in Ruby or Perl helps with Python. Factors that can lengthen the time include lack of suitably experienced teammates, not being 100%-immersed in the "new thing", and psychological resistance (not really wanting to do it with all one's heart!-).
I've focused on programming languages for my examples, but some technologies can be even harder, i.e., take longer to master -- if you've never written embedded hard-real-time programs (no dynamic allocation of memory allowed, proofs of an upper bound on response time required for every function), even six months might not suffice; some application areas require mastery of application domains that, all on their own, can take even longer (if, to understand at all what's going on and therefore be fully productive, you need the equivalent of a BSc in Psychology, or deep knowledge of the Law, or a CPA's qualifications, etc., well, each of those takes years on its own!).
I don't think the language as such is the issue, rather the programming paradigm it encompasses.
e.g. earlier this year I tried C#, coming from a Java perspective. That was all very straightforward. However, I'm now trying Scala. Because of the functional aspect, I expect to be learning and honing my skills for a lot longer (you can write Scala in an imperative fashion, but you don't leverage its strengths doing that).
I suspect the same would apply when (say) migrating from a relational database to an OO database, vs. a MS-SQL/Oracle migration.
It does depend, mainly on how closely the language resembles a language they already know, as well as individual abilities at picking up new things. Moving between similar languages like C++, Java, and C# is very easy. Similarly, moving from (say) Win32 to MFC to .net is going to be easier than from MFC to MacOS.
Moving from C to C++ is likely to take longer, as the programmer has to learn OO methodologies. Moving from C++ to Perl or ML could take a lot longer!
However, you usually don't need to know much to get started. Moving from C++ to C# can be done in a few hours reading (on the main differences) and then you can start writing (or modifying existing) code. That's because (a) you already know how to do OO programming, and (b) 95% of the syntax is identical.
But the main thing it depends on is your definition of "proficient". With similar languages, you will be able to write good code within a few days (an algorithm is usually fairly language independent), but it usually takes months or years to become truly "proficient" in a language or large library.
So I'd say as a rule of thumb, "up to (a reasonable) speed" in a few weeks, but you might see silly "mistakes" or inefficiencies in their code for months/years until they learn all the little tricks of the language.
In the case of people learning OO, it usually seems to take a few days to get the basic concepts, and then at about the 2-year mark a moment of epiphany occurs where the programmer suddenly realises that they truly "get" it. (I guess this is when your brain starts thinking fluently in OO rather than trying to think procedurally and then translate that into an OO approach.)
In our environment (US health care revenue cycle) it is more than just learning and becoming proficient in the language or technology stack we use to deliver our solutions to our customers. The developer also has to understand the problem domain. We work with entities that often don't document the behaviors of their systems well enough for external entities (us) to communicate with them to get the data that our customers want. Our developers are forced to think beyond the specs to build a functioning system.
There is also the inevitable "It doesn't work; fix it" problem report from the customer support staff. Frequently the problem isn't a defect in our software; it is an issue with other entities with which our software communicates. Our developers have to be able to identify (and sometimes prove) that it isn't our software so that our business analyst-types can go to that other entity and explain the issue in a way that will get them to resolve the problem.
You were expecting this answer but it all depends on the person/programmer. I have been in a situation where two equally skilled programmers had to pick up something new, one got it right away, while the other one took some time. Previous exposures to other technologies are also a factor.
Personally, in regard to something new, I budget my time to learning everything about it every chance I get. It would take about 6 months to fully be comfortable.
Hope this helps.
I am talking about something fairly substantial, like a language, or a framework / product
that has a lot of ways of doing things. Obviously, many libraries takes very little time to start using.
When you hire somebody new (but experienced) to the team, and they don't have expertise in technology you
are using, but know something similar, how much time do you budget for them to "get online."
Twenty-three work days, six hours, forty-three minutes, and seventeen point nine seconds.
What do you do in your projects? How do you budget the time to account for learning.
I think these questions are better!
Try to find an easy project in the new technology, and have them do that. If possible, have the person start by fixing bugs, then adding small features.
Learning is incremental. One can continue learning details of, say, C++ syntax throughout one's life. When one is an "expert" in a topic, it just means that the gains from learning more in that topic are growing smaller.
+1 for it depends.
It depends on such things as
the attitude and capabilities of the person learning it
how well the problem area and programming paradigm are understood by that person
the similarity of the new technology / language to other technologies he/she does know
the consistency of the new technology / language in its interface (API, grammar, etc...)
what counts as proficient (knowing just the language, or also the basic library, or also the runtime behaviour (interactions with the underlying technology))
Having said that, in my experience a smart person learning a new language/technology will quickly be more productive than other people with more experience in that language/technology.
See Peter Norvig's Teach Yourself Programming in Ten Years for the related question of how long does it take to become proficient in programming.
It so completely depends on whether you already know languages that are similar to the new one, and know something about the problem domain the new language is suited for. I'd say don't expect to be reasonably proficient in less than 3-6 months, but again, it depends.
To take one example I implemented a PHP/MySQL web application a couple years ago (total effort was about 6 months). It was my first reasonably large web application, and my first PHP ever. I've used relational databases, but this was also my first exposure to MySQL. MySQL came very quickly, as expected, since it's really only a dialect of a language I knew well. What surprised me was that PHP also came quickly. I realized that not only did it borrow ideas from PERL and C/C++, but the whole paradigm of coding with integrated SQL statements strongly drew on some experience I had in the 90's with, of all things, Informix 4GL.
At the other end of the spectrum, I've never really learned a functional language, so I'm trying to pick up Scala. This is going to take substantially longer, and there'll be a long period where my Scala will feel like Java in disguise, and not be that functional.
So ... it depends! ;-)
I agree that it depends.
You also run the risk that if the person knows one technology/paradigm, they will code in the new language/technology using the old practices/paradigms.
For example, I picked up Python really fast (I'm a Java/C++ guy), but it took a long time before I stopped writing Java-style code in Python and started thinking functionally.
To get really good, I think there's no replacement for experience. For instance, I'm sure I can easily pick up J2EE, but the experience to build the best enterprise systems is not something you can pick up that fast.
I came across this article about programming styles as seen by Edsger Dijkstra. To paraphrase quickly: when the analogy is made to programming, the main difference is that Mozart fully understood (debatably) the problem before writing anything, while Beethoven made his decisions as he wrote the notes out on paper, creating many revisions along the way. With Mozart programming, version 1.0 would be the only version: the software should aim to work with no errors and maximum efficiency. Also, Dijkstra says software not at that level of refinement and stability should not be released to the public.
Based on his views, two questions. Is Mozart programming even possible? Would the software we write today really benefit if we adopted the Mozart style instead?
My thoughts. It seems, to address the increasing complexity of software, we've moved on from this method to things like agile development, public beta testing, and constant revisions, methods that define web development, where speed matters most. But when I think of all the revisions web software can go through, especially during maintenance, when often patches are applied over patches, to then be refined through a tedious refactoring process—the Mozart way seems very attractive. It would at least lessen those annoying software updates, e.g. Digsby, Windows, iTunes, etc., many the result of unforeseen vulnerabilities that require a new and immediate release.
Edit: Refer to the response below for a more accurate explanation of Dijkstra's views.
The Mozart programming style is a complete myth (everybody has to edit and modify their initial efforts), and although "Mozart" is essentially a metaphor in this example, it's worth noting that Mozart was substantially a myth himself.
Mozart was a supposed magical child prodigy who composed his first sonata at 4 (he was actually 6, and it sucked - you won't ever hear it performed anywhere). It's rarely mentioned, of course, that his father was considered Europe's greatest music teacher, and that he forced all of his children to practice playing and composition for hours each day as soon as they could pick up an instrument or a pen.
Mozart himself was careful to perpetuate the illusion that his music emerged whole from his mind by destroying most of his drafts, although enough survive to show that he was an editor like everyone else. Beethoven was just more honest about the process (maybe because he was deaf and couldn't tell if anyone was sneaking up on him anyway).
I won't even mention the theory that Mozart got his melodies from listening to songbirds. Or the fact that he created a system that used dice to randomly generate music (which is actually pretty cool, but might also explain how much of Mozart's music appeared to come from nowhere).
The moral of the story is: don't believe the hype. Programming is work, followed by more work to fix the mistakes you made the first time around, followed by more work to fix the mistakes you made the second time around, and so on and so forth until you die.
It doesn't scale.
I can figure out a line of code in my head, a routine, and even a small program. But a medium program? There are probably some guys that can do it, but how many, and how much do they cost? And should they really write the next payroll program? That's like wasting Mozart on muzak.
Now, try to imagine a team of Mozarts. Just for a few seconds.
Still it is a powerful instrument. If you can figure out a whole line in your head, do it. If you can figure out a small routine with all its funny cases, do it.
On the surface, it avoids going back to the drawing board because you didn't think of one edge case that requires a completely different interface altogether.
The deeper meaning (head fake?) can be explained by learning another human language. For a long time you think about which words represent your thoughts, and how to order them into a valid sentence - that transcription costs a lot of foreground cycles.
One day you will notice the liberating feeling that you just talk. It may feel like "thinking in a foreign language", or as if "the words come naturally". You will sometimes stumble, looking for a particular word or idiom, but most of the time the translation runs on the vast resources of the "subconscious CPU".
The "high goal" is developing a mental model of the solution that is (mostly) independent of the implementation language, to separate the solution of a problem from its transcription. Transcription is easy, repetitive and easily trained, and abstract solutions can be reused.
I have no idea how this could be taught, but "figuring out as much as possible before you start to write it" sounds like good programming practice towards that goal.
A classic story from Usenet, about a true programming Mozart.
Real Programmers write in Fortran.
Maybe they do now, in this decadent
era of Lite beer, hand calculators and
"user-friendly" software but back in
the Good Old Days, when the term
"software" sounded funny and Real
Computers were made out of drums and
vacuum tubes, Real Programmers wrote
in machine code. Not Fortran. Not
RATFOR. Not, even, assembly language.
Machine Code. Raw, unadorned,
inscrutable hexadecimal numbers.
Directly.
Lest a whole new generation of
programmers grow up in ignorance of
this glorious past, I feel duty-bound
to describe, as best I can through the
generation gap, how a Real Programmer
wrote code. I'll call him Mel, because
that was his name.
I first met Mel when I went to work
for Royal McBee Computer Corp., a
now-defunct subsidiary of the
typewriter company. The firm
manufactured the LGP-30, a small,
cheap (by the standards of the day)
drum-memory computer, and had just
started to manufacture the RPC-4000, a
much-improved, bigger, better, faster
-- drum-memory computer. Cores cost too much, and weren't here to stay,
anyway. (That's why you haven't heard
of the company, or the computer.)
I had been hired to write a Fortran
compiler for this new marvel and Mel
was my guide to its wonders. Mel
didn't approve of compilers.
"If a program can't rewrite its own
code," he asked, "what good is it?"
Mel had written, in hexadecimal, the
most popular computer program the
company owned. It ran on the LGP-30
and played blackjack with potential
customers at computer shows. Its
effect was always dramatic. The LGP-30
booth was packed at every show, and
the IBM salesmen stood around talking
to each other. Whether or not this
actually sold computers was a question
we never discussed.
Mel's job was to re-write the
blackjack program for the RPC-4000.
(Port? What does that mean?) The new
computer had a one-plus-one addressing
scheme, in which each machine
instruction, in addition to the
operation code and the address of the
needed operand, had a second address
that indicated where, on the revolving
drum, the next instruction was
located. In modern parlance, every
single instruction was followed by a
GO TO! Put that in Pascal's pipe and
smoke it.
Mel loved the RPC-4000 because he
could optimize his code: that is,
locate instructions on the drum so
that just as one finished its job, the
next would be just arriving at the
"read head" and available for
immediate execution. There was a
program to do that job, an "optimizing
assembler", but Mel refused to use it.
"You never know where it's going to
put things", he explained, "so you'd
have to use separate constants".
It was a long time before I understood
that remark. Since Mel knew the
numerical value of every operation
code, and assigned his own drum
addresses, every instruction he wrote
could also be considered a numerical
constant. He could pick up an earlier
"add" instruction, say, and multiply
by it, if it had the right numeric
value. His code was not easy for
someone else to modify.
I compared Mel's hand-optimized
programs with the same code massaged
by the optimizing assembler program,
and Mel's always ran faster. That was
because the "top-down" method of
program design hadn't been invented
yet, and Mel wouldn't have used it
anyway. He wrote the innermost parts
of his program loops first, so they
would get first choice of the optimum
address locations on the drum. The
optimizing assembler wasn't smart
enough to do it that way.
Mel never wrote time-delay loops,
either, even when the balky
Flexowriter required a delay between
output characters to work right. He
just located instructions on the drum
so each successive one was just past
the read head when it was needed; the
drum had to execute another complete
revolution to find the next
instruction. He coined an
unforgettable term for this procedure.
Although "optimum" is an absolute
term, like "unique", it became common
verbal practice to make it relative:
"not quite optimum" or "less optimum"
or "not very optimum". Mel called the
maximum time-delay locations the "most
pessimum".
After he finished the blackjack
program and got it to run, ("Even the
initializer is optimized", he said
proudly) he got a Change Request from
the sales department. The program used
an elegant (optimized) random number
generator to shuffle the "cards" and
deal from the "deck", and some of the
salesmen felt it was too fair, since
sometimes the customers lost. They
wanted Mel to modify the program so,
at the setting of a sense switch on
the console, they could change the
odds and let the customer win.
Mel balked. He felt this was patently
dishonest, which it was, and that it
impinged on his personal integrity as
a programmer, which it did, so he
refused to do it. The Head Salesman
talked to Mel, as did the Big Boss
and, at the boss's urging, a few
Fellow Programmers. Mel finally gave
in and wrote the code, but he got the
test backwards, and, when the sense
switch was turned on, the program
would cheat, winning every time. Mel
was delighted with this, claiming his
subconscious was uncontrollably
ethical, and adamantly refused to fix
it.
After Mel had left the company for
greener pa$ture$, the Big Boss asked
me to look at the code and see if I
could find the test and reverse it.
Somewhat reluctantly, I agreed to
look. Tracking Mel's code was a real
adventure.
I have often felt that programming is
an art form, whose real value can only
be appreciated by another versed in
the same arcane art; there are lovely
gems and brilliant coups hidden from
human view and admiration, sometimes
forever, by the very nature of the
process. You can learn a lot about an
individual just by reading through his
code, even in hexadecimal. Mel was, I
think, an unsung genius.
Perhaps my greatest shock came when I
found an innocent loop that had no
test in it. No test. None. Common
sense said it had to be a closed loop,
where the program would circle,
forever, endlessly. Program control
passed right through it, however, and
safely out the other side. It took me
two weeks to figure it out.
The RPC-4000 computer had a really
modern facility called an index
register. It allowed the programmer to
write a program loop that used an
indexed instruction inside; each time
through, the number in the index
register was added to the address of
that instruction, so it would refer to
the next datum in a series. He had
only to increment the index register
each time through. Mel never used it.
Instead, he would pull the instruction
into a machine register, add one to
its address, and store it back. He
would then execute the modified
instruction right from the register.
The loop was written so this
additional execution time was taken
into account -- just as this
instruction finished, the next one was
right under the drum's read head,
ready to go. But the loop had no test
in it.
The vital clue came when I noticed the
index register bit, the bit that lay
between the address and the operation
code in the instruction word, was
turned on-- yet Mel never used the
index register, leaving it zero all
the time. When the light went on it
nearly blinded me.
He had located the data he was working
on near the top of memory -- the
largest locations the instructions
could address -- so, after the last
datum was handled, incrementing the
instruction address would make it
overflow. The carry would add one to
the operation code, changing it to the
next one in the instruction set: a
jump instruction. Sure enough, the
next program instruction was in
address location zero, and the program
went happily on its way.
I haven't kept in touch with Mel, so I
don't know if he ever gave in to the
flood of change that has washed over
programming techniques since those
long-gone days. I like to think he
didn't. In any event, I was impressed
enough that I quit looking for the
offending test, telling the Big Boss I
couldn't find it. He didn't seem
surprised.
When I left the company, the blackjack
program would still cheat if you
turned on the right sense switch, and
I think that's how it should be. I
didn't feel comfortable hacking up the
code of a Real Programmer.
Edsger Dijkstra discusses his views on Mozart vs Beethoven programming in this YouTube video entitled "Discipline in Thought".
People in this thread have pretty much discussed how Dijkstra's views are impractical. I'm going to try to defend him a bit.
Dijkstra is against companies essentially "testing" their software on their customers - releasing version 1.0 and then immediately patching it to 1.1. He felt that a program should be polished to a degree where "hotfix" patches are borderline unethical.
He did not think that software should be written in one fell swoop or that changes would never need to be made. He often discusses his design ideals, one of them being modularity and ease of change. He did think, however, that individual algorithms should be written in this way - but only after you have completely understood the problem. That was part of his discipline.
He found after all his extensive experience with programmers, that programmers aren't happy unless they are pushing the limits of their knowledge. He said that programmers didn't want to program something they completely and 100% understood because there was no challenge in it. Programmers always wanted to be on the brink of their knowledge. While he understood why programmers are like that he stated that it wasn't representative of low-error tolerance programming.
There are some industries or applications of programming where I believe Dijkstra's "discipline" is warranted as well: NASA rovers, health-industry embedded devices (e.g. ones that dispense medication), certain financial software that transfers our money. These areas don't have the luxury of incremental change after release, and a more "Mozart approach" is necessary.
I think the Mozart story confuses what gets shipped versus how it is developed. Beethoven did not beta-test his symphonies on the public. (It would be interesting to see how much he changed any of the scores after the first public performance.)
I also don't think that Dijkstra was insisting that it all be done in your head. After all, he wrote books on disciplined programming that involved working it out on paper, and to the same extent that he wanted to see mathematical-quality discipline, have you noticed how much paper and chalk board mathematicians may consume while working on a problem?
I favor Simucal's response, but I think the Mozart-Beethoven metaphor should be discarded. That shoe-horns Dijkstra's insistence on discipline and understanding into a corner where it really doesn't belong.
Additional Remarks:
The TV popularization is not so hot, and it confuses some things about musical composition and what a composer is doing and what a programmer is doing. In Dijkstra's own words, from his 1972 Turing Award Lecture: "We must not forget that it is not our business to make programs; it is our business to design classes of computations that will display a desired behavior." A composer may be out to discover the desired behavior.
Also, in Dijkstra's notion that version 1.0 should be the final version, we too easily confuse how desired behavior and functionality evolve over time. I believe he oversimplifies in thinking that all future versions are because the first one was not thought out and done rigorously and reliably.
Even without time-to-market urgency, I think we now understand much better that important kinds of software evolve along with the users' experience with them and the utilitarian purposes they have for them. Obvious counter-examples are games (also consider how theatrical motion pictures are developed). Do you think Beethoven could have written Symphony No. 9 without all of his preceding experience and exploration? Do you think the audience could have heard it for what it was? Should he have waited until he had the perfect sonata? I'm sure Dijkstra doesn't propose this, but I do think he goes too far with Mozart-Beethoven to make his point.
In addition, consider chess-playing software. The new versions are not because the previous ones didn't play correctly. It is about exploiting advances in chess-playing heuristics and the available computer power. For this and many other situations, the idea that version 1.0 be the final version is off base. I understand that he is rightfully objecting to the release of known-to-be unreliable and maybe impaired software with deficiencies to be made up in maintenance and future releases. But the Mozartian counter-argument doesn't hold up for me.
So, did Dijkstra continue to drive the first automobile he purchased, or clones of exactly that automobile? Maybe there is planned obsolescence, but a lot of it has to do with improvements and reliability that could not have possibly been available or even considered in previous generations of automotive technology.
I am a big Dijkstra fan, but I think the Mozart-Beethoven thing is way too simplistic as well as inappropriate. I am a big Beethoven fan too.
I think it's possible to appear to employ Mozart programming. I know of one company, Blizzard, that doesn't release a software product until they're good and ready. This doesn't mean that Diablo 3 will spring whole and complete from someone's mind in one session of dazzlingly brilliant coding. It does mean that that's how it will appear to the rest of us. Blizzard will test the heck out of their game internally, not showing it to the rest of the world until they've got all the kinks worked out. Most companies don't take this approach, preferring instead to release software when it's good enough to solve a problem, then fix bugs and add features as they come up. This approach works (to varying degrees) for most companies.
Well, we can't all be as good as Mozart, can we? Perhaps Beethoven programming is easier.
If Apple adopted "Mozart" programming, there would be no Mac OS X or iTunes today.
If Google adopted "Mozart" programming, there would be no Gmail or Google Reader.
If SO developers adopted "Mozart" programming, there would be no SO today.
If Microsoft adopted "Mozart" programming, there would be no Windows today (well, I think that would be good).
So the answer is simply NO. Nothing is perfect, and nothing is ever meant to be perfect, and that includes software.
I think the idea is to plan ahead. You need to at least have some kind of outline of what you are trying to do and how you plan to get there. If you just sit down at the keyboard and hope "the muse" will lead you to where your program needs to go, the results are liable to be rather uneven, and it will take you much longer to get there.
This is true with any kind of writing. Very few authors just sit down at a typewriter with no ideas and start banging away until a bestselling novel is produced. Heck, my father-in-law (a high school English teacher) actually writes outlines for his letters.
Progress in computing is worth a sacrifice in glory or genius here and there.
I've had several false starts in the past with teaching myself how to program. I've worked through several books (mostly C and Python) and ended up just learning the syntax without feeling as though I could sit down and actually write a program for myself. When I try to look through the source trees of a project on Codeplex or Sourceforge, I never seem to know where to start reading the code -- the dependencies seem to go in all directions.
I feel as though I'm not learning programming the way it's done "on the street," so I figured I'd take a different approach and ask how a newbie should learn how to code. If you had to learn programming all over again, what are the things you wouldn't do? What did you spend time on that you now know wasted weeks or months?
Where I see beginners wasting weeks or months is typing at the keyboard. The computer is very responsive and will cheerfully chew up hours of your time in the edit-compile-run cycle. If you are learning you will save many hours if
You plan out your design on paper before you approach a computer. It doesn't matter what design method you pick or if you have never heard of a design method. Just write down a plan while your brain is fully engaged and not distracted by the computer.
When code will not compile or will not produce the right answer, if you can't fix it in five minutes, walk away from the computer. Go think about what's happening. Print out your code and scribble on it until you believe it's right.
These are just devices for helping to implement the simple but difficult old advice to think before you code.
When I was learning, I solved countless problems on the 15-minute walk from the computing center to my home. Sadly, with modern PCs we don't get that 15 minutes :-) If you can learn to take it anyway, you will become a better programmer, faster.
I certainly wouldn't start by looking at "real" software projects. Like you say, it's too hard to know where to start. That's largely because large projects are more about their large-scale design than about the individual algorithms or about program flow; for one thing, you're probably looking at a complex GUI application with multi-threading, etc. There isn't really anywhere to "start" looking at the code.
The best way to learn programming is to have a problem you want (or need) to solve, and then to go about solving it. But most importantly, WRITE CODE. When you read programming books, do ALL the exercises. Make sure you did them right. There's no substitute for writing code. No substitute for screwing up and then fixing it.
Stack Over F.. wait no, heh.
The biggest time-sinks for me are generally in respect to "finding the best answer." I often find that I will run into a problem that I know how to solve but feel that there is a better solution, and go on the hunt for it. It is only hours or days later, when I have 7 instances of Firefox, each containing at least 5 tabs sprawled out across 46" of monitor space, that I come to my senses and realize I've been caught in the black hole that is the pursuit of endless knowledge.
My advice to you, and to myself for that matter, is to become comfortable with the notion of refactoring. Essentially what this means (in case you are not familiar with the term) is that you come up with a solution for a problem and go with it, even if there is quite likely a better way of doing it. Once you have finished the problem, or even the program, you can then revisit your methodology, study it, and figure out where you can make changes to improve it.
This concept has always been hard for me to follow. In college I preferred to write a paper once, print it, and turn it in. Writing code can be thought of very similarly to writing a paper. Simply putting pen to pad and pushing out what's on your mind may work - but when you look back over it with a fresh pair of eyes you will, without question, see something you wish you had done differently.
I just noticed you talked about reading through source trees of other people's projects. Reading other people's code is a wonderful idea, but you must read more selectively. A lot of open-source code is hard to read and not stuff you should emulate anyway. So avoid reading any code that hasn't been recommended by a programmer you respect.
Hint: Jon Bentley, Brian Kernighan, Rob Pike, and P. J. Plauger, who are all programmers I respect, have published a lot of code worth reading. In books.
The only way to learn how to program is to write more code. Reading books is great, but writing / fixing code is the best way to learn. You can't learn anything without doing.
You might also want to look at this book, How to Design Programs, for more of a perspective on design than details of syntax.
The only thing I did that wasted weeks or months was worry about whether or not my designs were the best way to implement a particular solution. I know now that this is known as "premature optimization" and we all suffer from it to one degree or another. The right way to learn programming is to solve a problem, measure your solution to make sure it performs well enough, then move on to the next problem. After some time you'll have a pile of problems you've solved, but more importantly, you'll know a programming language.
There is excellent advice here, in other posts. Here are my thoughts:
1) Learn to type, the reasons are explained in this article by Steve Yegge. It will help more than you can imagine.
2) Reading code is generally considered a hard task. So, it is better to get an open source project, compile it, and start changing it and learn that way, rather than reading and trying to understand.
I can understand the situation you're in. Reading through books, even many of them, will not make you a programmer. What you need to do is START PROGRAMMING.
Actually, programming is a lot like swimming in my opinion. Even if you know only a little syntax and an even smaller set of coding techniques, start coding anyway. Make a small application: a home inventory, an expense catalog, a datesheet, a CD cataloger, anything you fancy.
The idea is to get into the nitty-gritty of it. Once you start programming you'll run into real-world problems, and your problem-solving skills will develop as you combat them. That's how you become a better programmer every day.
So get into the thick of it, and swim right through... That's how you'll make it.
Good luck
I think this question will have wildly different answers for different people.
For myself, I tried C++ at one point (I was about ten and had already been programming for a while), with a click-and-drag UI builder. I think this was a mistake, and I should have gone straight to C and pointers and such. Because I'm just that kind of person.
In your case, it sounds like you want to be led down the right path by someone and feel a bit timid about jumping in and doing something by yourself. (You've read several books and now you're asking what not to do.)
I'll tell you how I learned: by doing plenty of fun, relatively short projects, steadily growing in difficulty. I began with QBasic (which I think is still a great learning tool) and it was there where I developed most of my programming skills. They have of course been expanded and refined since that time but I was already capable of good design back in those days.
The sorts of projects you could take on depend on your interests; if you're mathematically inclined you might want to try a prime number generator or projecting 3D points onto the screen; if you're interested in game design then you could try cloning pong (easy) or minesweeper (harder); or if you're more of a hacker you might want to make a simple chat program or file encryption software.
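For instance, the prime number generator mentioned above makes a satisfying first project. Here is a minimal sketch in Python (plain trial division, no attempt at efficiency), just to show the scale to aim for; the function name and limit are of course arbitrary.

    def primes_up_to(limit):
        """Return all primes <= limit using simple trial division."""
        found = []
        for candidate in range(2, limit + 1):
            is_prime = True
            for p in found:
                if p * p > candidate:
                    break
                if candidate % p == 0:
                    is_prime = False
                    break
            if is_prime:
                found.append(candidate)
        return found

    print(primes_up_to(50))  # [2, 3, 5, 7, 11, 13, ..., 47]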
Work on these projects on your own, and don't worry about whether you're doing things the "right" way. As long as you get it to work, you've learned many things. Some time after you've completed a project you may want to revisit it and try to do it better, or just see how other people have done that sort of thing.
Given the way you seem to want to be led along, perhaps you should find yourself a mentor.
Do not start by learning how to use pointers and manually manage memory. You mentioned C, and I spent plenty of time trying to fix bugs that were caused by mixing up *x and &x. This is evil...
Find some problem to solve, write or draw a sketch of an algorithm solving the problem, then try to write it. Either use Python (which is much more beginner-friendly) or use C with statically allocated memory only. And use books/tutorials. They offer multiple exercises with solutions, so you can compare yours with them and see other approaches.
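As a concrete illustration of that workflow (the exercise itself is hypothetical, not part of the original advice), the sketch "walk through the numbers and remember the biggest one seen so far" translates almost line for line into Python:

    def largest(numbers):
        # sketch: start with the first number, then walk the rest,
        # replacing our "biggest so far" whenever something bigger shows up
        biggest = numbers[0]
        for n in numbers[1:]:
            if n > biggest:
                biggest = n
        return biggest

    print(largest([3, 17, 8, 42, 5]))  # prints 42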
Once you feel that you can actually write something simple, look at a book/tutorial on Object Oriented Design. It's not the best the world has to offer, but it might turn out to be intuitive. If not, check out functional programming (languages like LISP, Scheme, or Haskell) or logic programming (like Prolog). Maybe those will suit you better.
Also, find a mate: a person you can talk to about coding, code maintenance, and design. Such a person is worth even more than a book.
To all C fans: the C language is great, really. It allows memory usage optimization to an extent that is impossible in high-level languages such as Python or Ruby. The compiled code is also very fast, and it is often the only choice for an RTOS or a modern 3D game engine. But I don't believe it is a good entry point for a beginner.
Oh, and good luck to you! And don't be ashamed to ask! If you don't ask, the answer is much harder to find.
Assuming you have decent math skills, try http://projecteuler.net/. It presents a series of problems of increasing difficulty that should be solvable by writing short programs. This should give you experience in solving specific problems without getting lost in the details of open source projects.
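For instance, the first Project Euler problem asks for the sum of all multiples of 3 or 5 below 1000; one possible short solution (shown in Python purely as an illustration) is:

    # Project Euler problem 1: sum the natural numbers below 1000
    # that are multiples of 3 or 5
    total = sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)
    print(total)  # 233168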
After basic language syntax, you need to learn design. Which is hard. This book may help.
I think you should stop thinking you've wasted time so far; instead, your education is just incomplete, and you've taken a step you're not really ready for. It sounds like the books you've read are useful and you're learning the intricacies of the language. It also sounds like you're just not accustomed to the tools you'd use to package that code together so it runs.
Books that focus on topics like language syntax, design patterns, algorithms, and data structures will never mention the tools you need to actually apply that information. These books are great, but if that's all you've touched, it would explain your situation.
What development environment are you using? If you're developing for Windows, you really should be proficient with creating projects, adding code, running, and debugging in Visual Studio. You can download Visual Studio Express for free from Microsoft.
I recommend looking for tutorial-like books that actually step you through the UI of the development environment you are using. Look for actual screenshots with dropdown menus. Look at what the tutorials walk you through, and if it's something you don't know how to do, consider buying that book. Preferably it will have code you can copy and paste in, not code you write yourself.
I personally don't like these books, as I can anticipate how to do new things in VS based on how I'd do other things. But if your training is incomplete from a tools-usage perspective, this could move you in the right direction.
It is probably harder to find these types of tutorial books for Python or C development. There is an overabundance of them for .Net development though.
As someone who has only been working as a programmer for 6 months, I might not be the best person to help you get going, but since it wasn't that long ago that I knew next to nothing, it's quite fresh in my mind.
When I started my current job programming wasn't going to be part of my job description but when the opportunity came up to do some programming on the side, I couldn't pass it up.
I spent about 1 month doing tutorials on About.com's Delphi section. As much as people diss about.com, Zarko Gajic's tutorials were simple to understand and easy to follow. Once I had a basic grasp of the language and the IDE, I jumped straight into a project exporting accounting data for a program called "Adept". Took me a while, but I got there...
The biggest help for me was taking on a personal project. I developed an IRC bot in Java for a crappy 2D game called Soldat. I learnt a lot by planning out and coding my own project.
Now I'm pretty comfortable with Delphi Pascal, SQL, C# and Java. I think, once you get the hang of one OOP language, you can learn the syntax of another language, and it gets a lot easier to catch on.
Perhaps start with a small existing project, and find something within it that handles some core part of what it does; then, with a debugger, step through it and follow what it's doing from the point where you ask it to do that thing for you.
This helps you in a number of ways. You start to better grasp all of the various things that are touched by the code as it attempts to complete its request. You also learn invaluable debugging techniques, which far too many developers seem to lack: while you can often eventually discover what is wrong with repeated printf() (or equivalent) calls, if you can use a debugger you can solve issues an order of magnitude faster.
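If you happen to practise this in Python (just one possible choice of language and tool), the standard library's built-in pdb debugger is enough to get started; breakpoint() pauses the program and lets you step and inspect from there:

    def average(values):
        total = sum(values)
        breakpoint()  # execution pauses here; 'n' steps to the next line, 'p total' prints a value
        return total / len(values)

    print(average([2, 4, 6]))

The same habit transfers directly to gdb, the Visual Studio debugger, or whatever your environment provides.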
I have found that, conceptually, a great mental model for understanding programming in the abstract is a pattern of data flow. When a user manipulates data, how is it altered by a program for digestion and storage? How is it transformed to re-present to the user in a form that makes sense to them? Fundamentally, code is about the transformation of data, and all code can be broken down into constructs of various sizes whose purpose is to alter data in one way or another. Bugs form around the mismatch between what the programmer was expecting from the data, how the high-level libraries the coder is using treat the data, and how the data actually arrives. Following code with a debugger helps you fully understand this transformation in action by observing changes as they occur.
Standard answer is to make something; picking an easy language to do it in is good, but not essential. It's more the working out stuff in your own head, fixing it because it won't work, that really teaches you. For me, this always happens when I try my eternal dream projects (games) which I never finish but always learn from.
I think the thing I would avoid is learning a language in isolated snippets that don't really hang together but just teach various facets of a particular language. As others have said, the really hard and important thing is to learn design. I think the best way to do this is through a tutorial that walks you through creating an actual application, teaching design along the way. That way you can learn why certain decisions are made and learn how to accomplish what's needed to implement the design choices.
For example, I found Agile Web Development with Rails to be a really easy way to learn Ruby on Rails, much better than simply reading a Ruby manual or even poking my way around scattered web tutorials.
Another thing that I would avoid is developing code in isolation, that is, not having people look at it as I go along. Getting feedback from a mentor will help keep you on the right track with respect to the choices you are making and the correct use of language idioms.
Find a problem in your life or something you do that you just feel could be more efficient and write a small solution to it. It might just be a single script but you will gain much more confidence in your abilities when you start to see useful results of your work. You will also be more motivated to finish it as you are interested in using the solution. Start simple and small and then gradually move up to bigger projects.
And as you're working on a small project, focus on building everything with quality. I think this is lost on some programmers who feel that their software is more impressive if it contains a ton of features, but usually those features aren't well done or usable. If you focus on building quality solutions to real problems, you'll be a fantastic programmer.
Good luck!
Work on projects/problems that you already know how to solve partially
You should read Mike Clark's article: How I Learned Ruby. Essentially, he used the test framework for Ruby to exercise different elements of the language.
I used this technique to learn Python and it was very, very helpful. Not only did I learn the language, but I was also quite proficient in Python's test framework by the end of the exercise. Once you have the basics, you can start reading code and then work on building some larger project.
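In Python, that technique looks roughly like the following (a made-up "learning test" in the spirit of Clark's article, using the standard unittest module):

    import unittest

    class LearningTests(unittest.TestCase):
        # each test states what I *think* the language does;
        # running the suite tells me whether I'm right

        def test_string_split(self):
            self.assertEqual("a,b,c".split(","), ["a", "b", "c"])

        def test_list_slicing(self):
            self.assertEqual([1, 2, 3, 4][1:3], [2, 3])

        def test_dict_get_default(self):
            self.assertEqual({}.get("missing", 0), 0)

    if __name__ == "__main__":
        unittest.main()

Every time you wonder how some feature behaves, you add a test for it, and the growing suite doubles as your personal notes on the language.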