Minimalist Programming Tools - performance

What tools go well with or help minimalist programming? Examples would be libraries with a tight, clean interface and a very small size for their genre.
Techniques, functions or concepts that result in smaller and/or more efficient apps would be great. If you know of any other relevant tools, that would help as well.

This may not be quite what you're looking for, but I enjoyed reading A Whirlwind Tutorial on Creating Really Teensy ELF Executables for Linux, which starts out with basic techniques for reducing bloat, before going into far more detail than I thought possible in order to shave every last byte from an executable!

If not assembler, then almost any Forth.

See colorFORTH - minimal and strange ... best of both worlds :)

Related

How to implement Leitner algorithm (spaced repetition)?

Among the spaced repetition algorithms, there is a particular one named Leitner. It is widely used in flashcard-based learning systems. The main idea is to sort the cards into groups according to how likely you are to recall them.
After searching Google, it seems like there are no specific implementations in C, C++, or Objective-C, only some Ruby implementations.
I'm asking here to look for some clues.
Thanks
I feel that the following software might be useful for you -- http://flashqard-project.org/download.php. It's a C++ implementation, and the source code is open. You might want to check it out.
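The logic itself is small enough to sketch, even though the question asks about C/C++/Objective-C. Below is a rough, language-agnostic illustration in Python of the usual box scheme; the number of boxes and the review intervals are my own assumptions, not part of the Leitner method proper, and a real flashcard system would tune and persist them.

    import random

    # Hypothetical sketch of the Leitner box system: correctly answered cards
    # move up a box (reviewed less often), wrong answers drop back to box 0.
    NUM_BOXES = 5

    class Card:
        def __init__(self, question, answer):
            self.question = question
            self.answer = answer
            self.box = 0  # new cards start in the most frequently reviewed box

    def cards_due(cards, session):
        # Assumption: box n is reviewed every 2**n sessions.
        return [c for c in cards if session % (2 ** c.box) == 0]

    def review(card, answered_correctly):
        if answered_correctly:
            card.box = min(card.box + 1, NUM_BOXES - 1)  # promote
        else:
            card.box = 0  # demote to the first box

    # Example session loop (answers are simulated randomly here).
    deck = [Card("2+2", "4"), Card("capital of France", "Paris")]
    for session in range(1, 9):
        for card in cards_due(deck, session):
            review(card, answered_correctly=random.random() > 0.3)
        print(session, [c.box for c in deck])

Porting this to C or C++ is mostly a matter of replacing the list of cards with an array or vector and storing the box index per card.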

(When) Should I learn compilers?

According to this http://steve-yegge.blogspot.com/2007/06/rich-programmer-food.html article, I definitely should.
Quote: "Gentle, yet insistent executive summary: If you don't know how compilers work, then you don't know how computers work. If you're not 100% sure whether you know how compilers work, then you don't know how they work."
I thought that it was a very interesting article, and the field of application is very useful (do yourself a favour and read it).
But then again, I have seen successful senior software engineers who didn't know compilers very well, or internal machine architecture for that matter,
but did know a thing or two about each item in the following list:
A programming paradigm (OO, functional,…)
A programming language API (C#, Java..) and, some say, at least 2 very different ones (Java / Haskell)
A programming framework (Java, .NET)
An IDE to make you more productive (Eclipse, VisualStudio, Emacs,….)
Programming best practices (see fxcop rules for example)
Programming Principles (DRY, High Cohesion, Low Coupling, ….)
Programming methodologies (TDD, MDE)
Design patterns (Structural, Behavioural,….)
Architectural Basics (Tiers, Layers), Process Models (Waterfall, Agile, …)
A Testing Tool (Unit Testing, Model Testing, …)
A GUI technique (WPF, Swing)
A documenting tool (Javadoc, Sandcastle..)
A modelling language (and maybe a tool) (UML, VisualParadigm, Rational)
(undoubtedly forgetting very important stuff here)
Not all of these tools are necessary to be a good programmer (like a GUI technique when you just don't need it),
but most of them are. Where do compilers come in, and are they really that important? As I mentioned,
lots of programmers seem to be doing fine without knowing them, and given the multitude of knowledge domains, becoming a good programmer already looks like almost a lifetime achievement :-). So even if compilers are extremely important, isn't there always something still more important?
Or should I order 'The Unleashed Compilers Unlimited Bible (in 24H..)' today?
For those who have read the article, and want to start studying right away :
Learning Resources on Parsers, Interpreters, and Compilers
If you just want to be a run-of-the-mill coder, and write stuff... you don't need to take compilers.
If you want to learn computer science and appreciate and really become a computer scientist, you MUST take compilers.
Compilers is a microcosm of computer science! It contains every single problem, including (but not limited to) AI (greedy algorithms & heuristic search), algorithms, theory (formal languages, automata), systems, architecture, etc.
You get to see a lot of computer science come together in an amazing way. Not only will you understand more about why programming languages work the way that they do, but you will become a better coder for having that understanding. You will learn to understand the low level, which helps at the high level.
As programmers, we very often like to talk about things being a "black box"... but things are a lot smoother when you understand a little bit about what's in the box. Even if you don't build a whole compiler, you will surely learn a lot. You will get to see the formalisms behind parsing (and realize it's not just a bunch of special cases hacked together), and a bunch of NP complete problems. You will see why the theory of computer science is so important to understand for practical things. (After all, compilers are extremely practical... and we wouldn't have the compilers we have today without formalisms).
I really hope you consider learning about them... it will help you get to the next level as a computer scientist :-).
You should learn about compilers, for the simple reason that implementing a compiler makes you a better programmer. The compiler will surely suck, but you will have learned a lot along the way. It is a great way of improving (or practising) your programming skills.
You do not need to understand compilers to be a good programmer, but it can help. One of the things I realized when learning about them, is that compiling is simply a translation.
If you have ever translated from one language to another, you have just done compiling.
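To see that translation in miniature, here is a toy sketch (the node format and the instruction names are invented for illustration) that "compiles" a tiny expression tree into instructions for an imaginary stack machine and then runs them:

    # Toy illustration: translate a small expression tree into stack-machine
    # instructions, then execute them. Everything here is made up for clarity.
    def compile_expr(node):
        if isinstance(node, (int, float)):          # literal -> push it
            return [("PUSH", node)]
        op, left, right = node                      # ("+" or "*", left, right)
        code = compile_expr(left) + compile_expr(right)
        code.append({"+": ("ADD",), "*": ("MUL",)}[op])
        return code

    def run(code):
        stack = []
        for instr in code:
            if instr[0] == "PUSH":
                stack.append(instr[1])
            elif instr[0] == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif instr[0] == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return stack.pop()

    # (1 + 2) * 4  ->  PUSH 1, PUSH 2, ADD, PUSH 4, MUL  ->  12
    program = compile_expr(("*", ("+", 1, 2), 4))
    print(program, "=>", run(program))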
So when should you learn about compilers?
When you want to, or need it to solve a problem.
Compiler theory is useful, but not essential. Some of its techniques do come in handy, though, like lexical analysis and parsing.
Another is error handling. Compilers need a lot of it: user input can contain anything, even the unexpected, and you need to deal with all of it.
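As a small, concrete illustration of both points, here is a sketch of a lexer in Python; the token set is invented, and the point of interest is that any character it does not recognise produces an explicit error with a position, instead of being silently ignored:

    import re

    # Minimal lexer sketch: the token set is invented for illustration.
    TOKEN_SPEC = [
        ("NUMBER", r"\d+"),
        ("IDENT",  r"[A-Za-z_]\w*"),
        ("OP",     r"[+\-*/=]"),
        ("SKIP",   r"\s+"),
    ]
    MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

    def tokenize(text):
        pos = 0
        while pos < len(text):
            match = MASTER.match(text, pos)
            if not match:
                # Error handling: user input can contain anything, so report
                # exactly what was unexpected and where.
                raise SyntaxError(f"unexpected character {text[pos]!r} at position {pos}")
            if match.lastgroup != "SKIP":
                yield (match.lastgroup, match.group())
            pos = match.end()

    print(list(tokenize("total = price * 3")))
    # tokenize("total = price # 3") raises: unexpected character '#' at position 14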
If you're going to be working at a high-enough level where you're worrying over UML and self-describing code, you could easily go your entire career without wanting or needing intimate details of how the compiler works.
But, if you're an in-the-trenches coder and have no aspirations to manage your friends, it's likely that one day you'll realize you're waging war with your compiler. It could be a random bug that comes along or a hallway conversation about while-versus-for loops. You'll realize the assembly (or IL, likely, in the coming years) is just a bit to the left of what you were needing, and another universe will unfold.
So, I suppose my answer is, just be aware of the compiler for now, that it's doing quite a lot, but don't worry over it too much.
The compilers courses usually focus on how high-level code is analyzed and translated into machine code. That's very interesting, but not crucial. It's more important to understand what the machine code generated by the compiler actually is, so that you understand how a computer works and what each language construct costs.
So I'd rather say that you should know an assembly language (I mean a limited subset of assembly for one architecture) to understand how a computer works, and the latter is definitely required for a competent programmer, so that they understand what a segmentation fault is, when to optimize and when not to, and other similar low-level things.
If you intend to write extremely time-critical real-time code, you will benefit from understanding how the compiler optimises your code. However, you will actually benefit more from understanding the underlying architecture of your hardware.
From my experience, if you understand how the hardware works, and how the compiler interprets your code, you will be able to write code that does exactly what you intend it to do. I have been caught on several occasions, writing code that got optimised away by the compiler and made the hardware do something that I did not intend.
All in all, understanding the entire software-hardware stack is not essential to write good algorithms and code, but it will most certainly help!
From a practical perspective, general compiler theory is less of a concern than the assembler, linker and loader for a specific platform. For example, I just consider the GCC compiler a translator from my high-level C language to the low-level assembly language on an x86 platform. And more often than not, I manually refine ;) the code generated by the compiler.
From a scientific perspective, I would strongly suggest learning compiler theory; it will help you understand the great ideas that computers are built upon. And even more, it will give you a different eye on the world.
Just my opinion, but I believe compilers are not given enough attention in CS courses, not in mine, and not in any others as far as I know. I think any CS major should do two things after a sabbatical or after finishing their major: re-learn, if necessary, finite automata and maybe a formal-methods language, and then apply that knowledge by
writing a simple compiler. Alex Aiken has a very useful online tutorial on writing a compiler for COOL (Classroom Object Oriented Language), which as of the 2013 version is a subset of Scala, at least at the time of writing.

Minimum CompSci Knowledge Needed for Writing Desktop Apps

Having been a hobbyist programmer for 3 years (mainly Python and C) and never having written an application longer than 500 lines of code, I find myself faced with two choices :
(1) Learn the essentials of data structures and algorithm design so I can become a l33t computer scientist.
(2) Learn Qt, which would help me build projects I have been itching to build for a long time.
For learning (1), everyone seems to recommend reading CLRS. Unfortunately, reading CLRS would take me at least a year of study (or more, I'm not Peter Krumins). I also understand that to accomplish any moderately complex task using (2), I will need to understand at least the fundamentals of (1), which brings me to my question: assuming I use C++ as the programming language of choice, which parts of CLRS would give me sufficient knowledge of algorithms and data structures to work on large projects using (2)?
In other words, I need a list of theoretical CompSci topics absolutely essential for everyday application programming tasks. Also, I want to use CLRS as a handy reference, so I don't want to skip any material critical to understanding the later sections of the book.
Don't get me wrong here. Discrete math and the theoretical underpinnings of CompSci have been on my "TODO: URGENT" list for about 6 months now, but I just don't have enough time owing to college work. After a long time, I have 15 days off to do whatever the hell I like, and I want to spend these 15 days building applications I really want to build rather than sitting at my desk, pen and paper in hand, trying to write down the solution to a textbook problem.
(BTW, a less-math-more-code resource on algorithms will be highly appreciated. I'm just out of high school and my math is not at the level it should be.)
Thanks :)
This could be considered heresy, but the vast majority of application code does not require much understanding of algorithms and data structures. Most languages provide libraries which contain collection classes, searching and sorting algorithms, etc. You generally don't need to understand the theory behind how these work, just use them!
However, if you've never written anything longer than 500 lines, then there are a lot of things you DO need to learn, such as how to write your application's code so that it's flexible, maintainable, etc.
For a less-math, more code resource on algorithms than CLRS, check out Algorithms in a Nutshell. If you're going to be writing desktop applications, I don't consider CLRS to be required reading. If you're using C++ I think Sedgewick is a more appropriate choice.
Try some online comp sci courses. Berkeley has some, as does MIT. Software engineering radio is a great podcast also.
See these questions as well:
What are some good computer science resources for a blind programmer?
https://stackoverflow.com/questions/360542/plumber-programmers-vs-computer-scientists#360554
Heed the wisdom of Don and just do it. Can you define the features that you want your application to have? Can you break those features down into smaller tasks? Can you organize the code produced by those tasks into a coherent structure?
Of course you can. Identify any 'risky' areas (areas that you do not understand, e.g. something that requires more math than you know, or special algorithms you would have to research) and either find another solution, prototype a solution, or come back to SO and ask specific questions.
Moving from 500 LOC to a real (even if small) application is not that easy.
As Don was pointing out, you'll need to learn a lot of things about code (flexibility, reuse, etc.), and you need to learn some very basic configuration management as well (Visual SourceSafe, SVN?).
But the main issue is that you need a way to avoid being overwhelmed by your growing pile of functionality and code. That is not easy. What I can suggest is to put in place something to 'automatically' test your code (even in a very basic way) via some regression tests. Otherwise it's going to be hard.
As you can see, I think it's not related at all to data structures, algorithms or whatever.
Good luck and let us know
I must say that sitting down with a dry old textbook and reading it through is not the way to learn how to do anything effectively, even if you are making notes. Doing it is the best way to learn, using the textbooks as a reference. Indeed, using sites like this as a reference.
As for data structures - learn which one is good for whatever situation you envision: Sets (sorted and unsorted), Lists (ArrayList, LinkedList), Maps (HashMap, TreeMap). Complexity of doing basic operations - adding, removing, searching, sorting, etc. That will help you to select an appropriate library data structure to use in your application.
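The question is about C++, but the same menu exists in any mainstream standard library. As a quick illustration, here is what that choice looks like with Python's built-ins (the costs in the comments are the usual average cases):

    from collections import deque

    items = [3, 1, 2]          # list: append/index O(1), membership test O(n)
    items.append(4)
    items.sort()               # Timsort, O(n log n)

    seen = {1, 2, 3}           # set: add/membership O(1) on average, unordered
    print(2 in seen)

    ages = {"alice": 30}       # dict (a map): insert/lookup O(1) on average
    ages["bob"] = 25
    print(ages.get("carol"))   # None instead of an error when the key is missing

    queue = deque([1, 2, 3])   # deque: O(1) append/pop at both ends
    queue.appendleft(0)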
And also make sure you're reasonably comfortable with MVC - i.e., ensure your model is separate from your view (the Qt front-end) as much as possible. Best would be to have the model and algorithms working on their own, and then put the GUI on top. Or a unit test on top. Etc...
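Here is a minimal sketch of what "model separate from view" can look like; the class and test names are invented for illustration. Nothing in the model imports a GUI toolkit, so a unit test (or later a Qt window) can sit directly on top of it:

    import unittest

    # Hypothetical model: pure data + logic, no GUI imports anywhere.
    class TodoList:
        def __init__(self):
            self._items = []

        def add(self, text):
            if not text.strip():
                raise ValueError("empty item")
            self._items.append({"text": text, "done": False})

        def complete(self, index):
            self._items[index]["done"] = True

        def pending(self):
            return [i["text"] for i in self._items if not i["done"]]

    # A "view" on top of the model could be a Qt window -- or just a test.
    class TodoListTest(unittest.TestCase):
        def test_completed_items_leave_pending(self):
            todo = TodoList()
            todo.add("write model first")
            todo.add("add GUI later")
            todo.complete(0)
            self.assertEqual(todo.pending(), ["add GUI later"])

    if __name__ == "__main__":
        unittest.main()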
Good luck!
It's like saying you want to move to France, so should you learn french from a book, and what are the essential words - or should you just go to France and find out which words you need to know from experience and from copying the locals.
Writing code is part of learning computer science. I was writing code long before I'd even heard of the term, and lots of people were writing code before the term was invented.
Besides, you say you're itching to write certain applications. That can't be taught, so just go ahead and do it. Some things you only learn by doing.
(The theoretical foundations will just give you a deeper understanding of what you wind up doing anyway, which will mainly be copying other people's approaches. The only caveat is that in some cases the theoretical stuff will tell you what's futile to attempt - e.g. if one of your itches is to solve an NP complete problem, you probably won't succeed :-)
I would say the practical aspects of coding are more important. In particular, source control is vital if you don't use that already. I like bzr as an easy to set up and use system, though GUI support isn't as mature as it could be.
I'd then move on to one or both of the classics about the craft of coding, namely
The Pragmatic Programmer
Code Complete 2
You could also check out the list of recommended books on Stack Overflow.

Feature bloat - how much is too much?

I'm a computer science student designing a project, and I've started wondering what good examples there are of software, or even hardware, that toe the line between being feature-rich, with good usable features for regular users, and being too intimidating for new users. Also, could anyone recommend any good tips/books for designing good quality applications that are feature-rich but not "bloated"?
"Make everything as simple as possible, but not simpler." - Albert Einstein
"Perfection is reached not when there is nothing left to add, but when there is nothing left to take away." - Antoine de Saint-Exupéry
I am not trying to be flippant but these quotes really are the best advice. Simplicity of design should be your goal. Not that achieving simplicity is easy! On the contrary, it is quite difficult but it is possible.
Try thinking about things a bit differently. Rather than
How many things can I add before this becomes bloated?
try
What are the fewest number of features and elements I can include while still providing a superior experience for my users?
Here's a good set of slides from a presentation on the topic: Rescue Princess 2.0.
The first order of business should just be keeping the application easy to use. Beyond that, all I can say is, beware of writing features for an imaginary user: make sure someone actually needs it before you start coding.
As a direct answer to your question: pretty much any Microsoft product. I'm showing my bias here, but Microsoft has a strong tendency to keep their codebase, and add features on top of features until the original functionality of the app is nearly lost beneath mounds of accreted crud.
Look at MS Word, for example; while you can still just open it up and start typing, god forbid you want to renumber a section of your document while leaving the rest alone. Heaven forbid you want to generate a Table of Contents that includes references to an Appendix. This sort of stuff is de rigueur for word processors, and Word supports it; it just supports it in a way that you cannot get it done without a manual, several cups of coffee, and bandages to stop the bleeding from banging your head on the desk.
Microsoft isn't alone in doing this; this thing tends to happen all the time, with all sorts of products; but they are among the worst offenders, I've found.
Two questions:
1: What do your users need and want?
2: Which features will you have time to implement?
Your question is pretty general. Which features constitute bloat? That kind of depends on whether you're writing an antivirus scanner, an OS or a word processor.
There is no clear barrier between "good" and "too much".
However, it depends on what you want to do.
If you're developing an SDK, I recommend splitting your implementation into several small libraries (rather than just one big library; SDL, for example, is split into the SDL core, SDL_Mixer, SDL_Image, etc.).
If you're developing an application, keep a module-based system and a plug-in mechanism.
That way, new features can be added more easily and bloat can be more easily detected.
You may get to a point where you'll add new features that some will consider "great" and others "bloat". Otherwise, your application may reach a point where some will call it "feature-poor" and others "just enough".
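For what it's worth, the plug-in mechanism mentioned above can start out very small. Here is a rough sketch in Python (the registry and the example plug-ins are invented): the core only knows about the registry, optional features register themselves, and leaving a feature out of a build is just a matter of not importing its module.

    # Minimal plug-in registry sketch: features register themselves; the core
    # only iterates over whatever happens to be registered.
    PLUGINS = {}

    def plugin(name):
        def register(func):
            PLUGINS[name] = func
            return func
        return register

    # Each optional feature could live in its own module and just decorate a function.
    @plugin("word_count")
    def word_count(document):
        return len(document.split())

    @plugin("shout")
    def shout(document):
        return document.upper()

    def run_all(document):
        # The core never hard-codes features, so removing one is just not importing it.
        return {name: func(document) for name, func in PLUGINS.items()}

    print(run_all("keep the core small"))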
This isn't an exact quote, but the idea was something like this:
A piece of software is perfect not when there is nothing more to add, but when there is nothing more to remove.
In essence, the simpler and more to the point a piece of software is, the better.
To get examples of good software design, take a look at programs that are popular today. Google applications would be a nice place to look. Skype perhaps. Heh, even StackOverflow. :)
If you want intimidating, go to the world of CAD. Check out Blender, for example. It's free 3D modelling software. A good tool, I'm told, but the UI has so many buttons/panels/menus/etc. that it makes baby bunnies cry. Unfortunately I cannot say whether this is a good example of a "bad" UI. 3D modelling is a very complex process and all those tools are probably in the right place. But it's definitely intimidating. :)
Bad UI design can often be found in proprietary software that comes with proprietary hardware. Unfortunately I cannot give you any examples off the top of my head.
I always tend to design my projects so that they're just skeletons which are as extensible as possible. Limiting factors are performance, complexity and third-party limitations.
This way you can add additional features after finishing the basic structure. A user could also add the features he needs.
This probably does not work very well for GUI applications, which should have good usability without much configuration, but I'm sticking with this approach for the libraries I develop. (They're used by other coders who like to have a highly modifiable piece of software.)
It's not very hard to develop an application/lib which is bloated with features. But it is hard to develop an app which can be easily extended by other developers/users to match their own needs.
Develop a wide-ranging plug-in system so you add and take out stuff at any time. Problem solved. If only that was as easy as writing spaghetti code. ;)

What are the biggest time wasters for learning programming? [closed]

I've had several false starts in the past with teaching myself how to program. I've worked through several books (mostly C and Python), and end up just learning the syntax without feeling as though I could sit down and actually write a program for myself. When I try to look through the source trees of a project on Codeplex or Sourceforge, I never seem to know where to start reading the code -- the dependencies seem to go in all directions.
I feel as though I'm not learning programming the way it's done "on the street," so I figured I'd take a different approach to asking how a newbie should learn how to code. If you had to learn programming all over again, what are the things you wouldn't do? What did you spend time doing that you now know wasted you weeks or months?
Where I see beginners wasting weeks or months is typing at the keyboard. The computer is very responsive and will cheerfully chew up hours of your time in the edit-compile-run cycle. If you are learning you will save many hours if
You plan out your design on paper before you approach a computer. It doesn't matter what design method you pick or if you have never heard of a design method. Just write down a plan while your brain is fully engaged and not distracted by the computer.
When code will not compile or will not produce the right answer, if you can't fix it in five minutes, walk away from the computer. Go think about what's happening. Print out your code and scribble on it until you believe it's right.
These are just devices for helping to implement the simple but difficult old advice to think before you code.
When I was learning, I solved countless problems on the 15-minute walk from the computing center to my home. Sadly, with modern PCs we don't get that 15 minutes :-) If you can learn to take it anyway, you will become a better programmer, faster.
I certainly wouldn't start by looking at "real" software projects. Like you say, it's too hard to know where to start. That's largely because large projects are more about their large-scale design than about the individual algorithms or about program flow; for one thing, you're probably looking at a complex GUI application with multi-threading, etc. There isn't really anywhere to "start" looking at the code.
The best way to learn programming is to have a problem you want (need) to solve, and then going about solving it. But most importantly, WRITE CODE. When you read programming books, do ALL the exercises. Make sure you did them right. There's no substitute for writing code. No substitute for screwing up and then fixing it.
Stack Over F.. wait no, heh.
The biggest time-sinks for me are generally in respect to "finding the best answer." I often find that I will run into a problem that I know how to solve but feel that there is a better solution, and go on the hunt for it. It is only hours/days later, with 7 instances of Firefox each containing at least 5 tabs sprawled out across 46" of monitor space, that I come to my senses and realize I've been caught in the black hole that is the pursuit of endless knowledge.
My advice to you, and myself for that matter, is to become comfortable with the notion of refactoring. Essentially what this means (in case you are not familiar with the term) is that you come up with a solution for a problem and go with it, even if there is quite likely a better way of doing it. Once you have finished the problem, or even the program, you can then revisit your methodology, study it, and figure out where you can make changes to improve it.
This concept has always been hard for me to follow. In college I preferred to write a paper once, print it, and turn it in. Writing code can be thought of very similarly to writing a paper. Simply putting pen to pad and pushing out what's on your mind may work - but when you look back over it with a fresh pair of eyes you will, without question, see something you wish you had done differently.
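A tiny, made-up example of that write-it-then-revisit-it cycle: the first version works and ships; the later revisit keeps the behaviour but makes the intent clearer.

    # First pass: it works, move on.
    def average_of_positives_v1(numbers):
        total = 0
        count = 0
        for n in numbers:
            if n > 0:
                total = total + n
                count = count + 1
        if count == 0:
            return 0
        return total / count

    # Later revisit ("refactoring"): same behaviour, clearer intent.
    def average_of_positives_v2(numbers):
        positives = [n for n in numbers if n > 0]
        return sum(positives) / len(positives) if positives else 0

    assert average_of_positives_v1([3, -1, 5]) == average_of_positives_v2([3, -1, 5]) == 4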
I just noticed you talked about reading through source trees of other people's projects. Reading other people's code is a wonderful idea, but you must read more selectively. A lot of open-source code is hard to read and not stuff you should emulate anyway. So avoid reading any code that hasn't been recommended by a programmer you respect.
Hint: Jon Bentley, Brian Kernighan, Rob Pike, and P. J. Plauger, who are all programmers I respect, have published a lot of code worth reading. In books.
The only way to learn how to program is to write more code. Reading books is great, but writing / fixing code is the best way to learn. You can't learn anything without doing.
You might also want to look at this book, How to Design Programs, for more of a perspective on design than details of syntax.
The only thing that I did that wasted weeks or months was worry about whether or not my designs were the best way to implement a particular solution. I know now that this is known as "premature optimization" and we all suffer from it to one degree or another. The right way to learn programming is to solve a problem, measure your solution to make sure it performs well enough, then move on to the next problem. After some time you'll have a pile of problems you've solved, but more importantly, you'll know a programming language.
There is excellent advice here, in other posts. Here are my thoughts:
1) Learn to type, the reasons are explained in this article by Steve Yegge. It will help more than you can imagine.
2) Reading code is generally considered a hard task. So, it is better to get an open source project, compile it, and start changing it and learn that way, rather than reading and trying to understand.
I can understand the situation you're in. Reading through books, even many of them, will not make you a programmer. What you need to do is START PROGRAMMING.
Actually, programming is a lot like swimming, in my opinion: even if you know only a little syntax and even fewer coding techniques, start coding anyway. Make a small application: a home inventory, an expense catalog, a datesheet, a CD cataloger, anything you fancy.
The idea is to get into the nitty-gritty of it. Once you start programming you'll run into real-world problems, and your problem-solving skills will develop as you combat them. That's how you become a better programmer every day.
So get into the thick of it, and swim right through... That's how you'll make it.
Good luck
I think this question will have wildly different answers for different people.
For myself, I tried C++ at one point (I was about ten and had already been programming for a while), with a click-and-drag UI builder. I think this was a mistake, and I should have gone straight to C and pointers and such. Because I'm just that kind of person.
In your case, it sounds like you want to be led down the right path by someone and feel a bit timid about jumping in and doing something by yourself. (You've read several books and now you're asking what not to do.)
I'll tell you how I learned: by doing plenty of fun, relatively short projects, steadily growing in difficulty. I began with QBasic (which I think is still a great learning tool) and it was there where I developed most of my programming skills. They have of course been expanded and refined since that time but I was already capable of good design back in those days.
The sorts of projects you could take on depend on your interests; if you're mathematically inclined you might want to try a prime number generator or projecting 3D points onto the screen; if you're interested in game design then you could try cloning pong (easy) or minesweeper (harder); or if you're more of a hacker you might want to make a simple chat program or file encryption software.
Work on these projects on your own, and don't worry about whether you're doing things the "right" way. As long as you get it to work, you've learned many things. Some time after you've completed a project you may want to revisit it and try to do it better, or just see how other people have done that sort of thing.
Given the way you seem to want to be led along, perhaps you should find yourself a mentor.
Do not learn how to use pointers and how to manually manage memory. You mentioned C, and I spent plenty of time trying to fix bugs that were caused by mixing *x and &x. This is evil...
Find some problem to solve, write or draw a sketch of an algorithm solving the problem, then try to write it. Either use Python (which is much more friendly for beginners) or use C with statically allocated memory only. And use books/tutorials. They offer multiple exercises with solutions, so you can compare yours with them and see other approaches.
Once you feel that you can actually write something simple, look at some book/tutorial on Object Oriented Design. It's not the best the world has to offer, but it might turn out to be intuitive. If not, check out functional programming (languages like LISP, Scheme or Haskell), or logic programming (like Prolog). Maybe those will suit you better.
Also - find some mate. A person you can talk to about coding, code maintenance and design. Such a person is worth even more than a book.
To all C fans: the C language is great, really. It allows memory usage to be optimised to an extent impossible in high-level languages such as Python or Ruby. The compiled code is also very fast, and it is the only choice for an RTOS or a modern 3D game engine. But it is not a good entry point for a beginner; that's what I believe.
Oh, and good luck to you! And don't be ashamed to ask! If you don't ask, the answer is much harder to find.
Assuming you have decent math skills, try http://projecteuler.net/ It presents a series of problems of increasing difficulty that should be solvable by writing short programs. This should give you experience in solving specific problems without getting lost in the details of open source projects.
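For a sense of scale, the very first Project Euler problem (the sum of all multiples of 3 or 5 below 1000) fits in a couple of lines, and later problems grow in difficulty from there:

    # Project Euler, problem 1: sum of all multiples of 3 or 5 below 1000.
    print(sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0))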
After basic language syntax, you need to learn design. Which is hard. This book may help.
I think you should stop thinking you've wasted time so far; instead, I think your education is just incomplete, and you've taken a step you're not really ready for. It sounds like the books you've read are useful and you're learning the intricacies of the language. It sounds like you're just not accustomed to the tools you'd then use to package that code together so it runs.
Some books that focus on topics like language syntax, design patterns, algorithms and data structures will never mention the tools you need to actually apply that information. These books are great, but if they're all you've touched, I think that would explain your situation.
What development environment are you using? If you're developing for windows you really should be proficient with creating projects, adding code, running and debugging in Visual Studio. You can download Visual Studio Express for free from Microsoft.
I recommend looking for tutorial like books that actually step you through the UI of development environment you are using. Look for actual screenshots with dropdown menus. Look at what the tutorials walk you through, and if its something you don't know how to do consider buying that book. Preferably it will have code you can copy'n'paste in, not code you write yourself.
I personally don't like these books, as I can anticipate how to do new things in VS based on how I'd do other things. But if your training is incomplete from a tools-usage perspective, this could move you in the right direction.
It is probably harder to find these types of tutorial books for Python or C development. There is an overabundance of them for .Net development though.
As someone who has only been working as a programmer for 6 months, I might not be the best person to help you get going, but since it wasn't that long ago when I knew next to nothing, its quite fresh in my mind.
When I started my current job programming wasn't going to be part of my job description but when the opportunity came up to do some programming on the side, I couldn't pass it up.
I spent about 1 month doing tutorials on About.com's Delphi section. As much as people diss about.com, Zarko Gajic's tutorials were simple to understand and easy to follow. Once I had a basic grasp of the language and the IDE, I jumped straight into a project exporting accounting data for a program called "Adept". Took me a while, but I got there...
The biggest help for me was taking on a personal project. I developed an IRC bot in Java for a crappy 2D game called Soldat. I learnt a lot by planning out and coding my own project.
Now I'm pretty comfortable with Delphi Pascal, SQL, C# and Java. I think, once you get the hang of one OOP language, you can learn the syntax of another language, and it gets a lot easier to catch on.
Perhaps start with a small existing project, and find some thing within it that handles some core part of what it does - then with a debugger, step through it and follow what it's doing from the point where you ask it to do that thing for you.
This helps you in a number of ways. You start to better grasp all of the various things that are touched by the code as it attempts to complete its request. Also, you learn invaluable debugging techniques which it seems like far too many developers lack - while you can often eventually discover what is wrong either with repeated printf() (or equivalent) calls, if you can debug you can solve issues an order of magnitude faster.
I have found that, conceptually, a great mental model for understanding programming in the abstract is a pattern of data flow. When a user manipulates data, how is it altered by a program for digestion and storage? How is it transformed to re-present to the user in a form that makes sense to them? Fundamentally, code is about the transformation of data, and all code can be broken down into constructs of various sizes whose purpose is to alter data in one way or another. Bugs form around the mismatch between what the programmer was expecting from the data, how the high-level libraries the coder is using treat the data, and how the data actually arrives. Following code with a debugger helps you fully understand this transformation in action by observing changes as they occur.
Standard answer is to make something; picking an easy language to do it in is good, but not essential. It's more the working out stuff in your own head, fixing it because it won't work, that really teaches you. For me, this always happens when I try my eternal dream projects (games) which I never finish but always learn from.
I think the thing I would avoid is learning a language in isolated snippets that don't really hang together but just teach various facets of a particular language. As others have said, the really hard and important thing is to learn design. I think the best way to do this is through a tutorial that walks you through creating an actual application, teaching design along the way. That way you can learn why certain decisions are made and learn how to accomplish what's needed to implement the design choices.
For example, I found Agile Web Development with Rails to be a really easy way to learn Ruby on Rails, much better than simply reading a Ruby manual or even poking my way around scattered web tutorials.
Another thing that I would avoid is developing code in isolation, that is, not having people look at it as I go along. Getting feedback from a mentor will help keep you on the right track with respect to the choices you are making and the correct use of language idioms.
Find a problem in your life or something you do that you just feel could be more efficient and write a small solution to it. It might just be a single script but you will gain much more confidence in your abilities when you start to see useful results of your work. You will also be more motivated to finish it as you are interested in using the solution. Start simple and small and then gradually move up to bigger projects.
And as you're working on a small project, focus on building everything with quality. I think this is lost on some programmers who feel that their software is more impressive if it contains a ton of features, but usually those features aren't well done or usable. If you focus on building quality solutions to real problems, you'll be a fantastic programmer.
Good luck!
Work on projects/problems that you already know how to solve partially
You should read Mike Clark's article: How I Learned Ruby. Essentially, he used the test framework for Ruby to exercise different elements of the language.
I used this technique to learn Python and it was very, very helpful. Not only did I learn the language, but I was very proficient in the test framework for Python at the end of the exercise. Once you have the basics you can start reading code and then working on building some larger project.
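If you want to try the same approach, here is a sketch of what those language-exercise tests can look like with Python's unittest; the specific assertions are just examples of facts about the language you might want to check your understanding of:

    import unittest

    # Learning-by-testing sketch: each test states something I *think* the language
    # does, and running the suite tells me whether my mental model is right.
    class LearnPython(unittest.TestCase):
        def test_string_slicing(self):
            self.assertEqual("hello"[1:3], "el")
            self.assertEqual("hello"[::-1], "olleh")

        def test_dict_get_has_a_default(self):
            self.assertIsNone({}.get("missing"))
            self.assertEqual({}.get("missing", 0), 0)

        def test_integer_division_floors(self):
            self.assertEqual(7 // 2, 3)
            self.assertEqual(-7 // 2, -4)   # floors toward negative infinity

    if __name__ == "__main__":
        unittest.main()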

Resources