Is a preference for brute force solutions a bad sign? [closed]

I'm a beginner C++ programmer, and to stretch my mind I've been trying some of the problems on projecteuler.net. Despite an interest in maths at school, I've found myself automatically going for brute force solutions to the problems, rather than looking for something streamlined or elegant.
Does this sound like a bad mindset to have? I feel a bit guilty doing it like this, but maybe quick and dirty is OK some of the time...

I think you should look at what your end goal is and what your constraints are.
Sometimes a brute-force method that tries every combination can solve a problem in 50ms, and a "clever" solution can solve it in 10ms. At that point, the less clever but easier-to-understand solution trumps the clever one.
However, there are problems where brute force will not just be inelegant but simply won't work: a naive brute-force attempt would take an unreasonable amount of time to finish. Those problems obviously need a more elegant approach.
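As a concrete (and purely illustrative) C++ sketch of that trade-off, take the first Project Euler problem, summing the multiples of 3 or 5 below 1000: the brute-force loop is trivially readable, while the closed-form version is faster but takes a moment to convince yourself it is correct.

    #include <iostream>

    // Brute force: test every number below the limit. O(n), but obvious.
    long long sumMultiplesBrute(long long limit) {
        long long sum = 0;
        for (long long i = 1; i < limit; ++i)
            if (i % 3 == 0 || i % 5 == 0) sum += i;
        return sum;
    }

    // "Clever": sum each arithmetic series with k * n * (n + 1) / 2 and
    // subtract the multiples of 15, which were counted twice. O(1).
    long long seriesSum(long long k, long long limit) {
        long long n = (limit - 1) / k;   // multiples of k strictly below limit
        return k * n * (n + 1) / 2;
    }

    long long sumMultiplesClosedForm(long long limit) {
        return seriesSum(3, limit) + seriesSum(5, limit) - seriesSum(15, limit);
    }

    int main() {
        std::cout << sumMultiplesBrute(1000) << "\n";       // 233168
        std::cout << sumMultiplesClosedForm(1000) << "\n";  // same answer
    }

At this size both versions finish instantly; the difference only starts to matter when the limit grows by several orders of magnitude.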
So ask yourself why you are attempting these Project Euler problems. Are you doing it to learn? Then trying a clever solution may be in your best interest, but only after you have first tried a brute-force solution to get a grasp of the problem.
When doing the Python Challenge problems I try to solve them in the most succinct way I can, pushing the limits of my abilities. After I solve one, I review other people's answers and take mental notes of what the people who were more clever than me did. Some make special use of a data structure I hadn't thought of that is better suited to the task, or they have little mathematical tricks that make their algorithm more efficient. In the end I try to absorb as much of their cleverness as I can and put it to use the next time I'm presented with a problem of a similar nature.

As a beginner programmer, you will be spending more of your mental energy figuring out how to actually implement things in C++ than on finding a clever solution to each problem. This is fine, because it gives you the opportunity to explore different areas of C++ while working on a wide range of problems.
When you become proficient in C++ and you don't have to think about how to do every little thing, then you will be able to spend more time inventing non-brute-force solutions.

No, this isn't a bad thing. I've had solutions that were so elegant they were wrong.

The elegant solutions weren't created spontaneously; they were derived from the brute-force solutions when more speed or less memory consumption were required from the current solution.
So no, it's not. It's how the elegant solutions came into being.

Ken Thompson: "When in doubt, use brute force"

I've sort of gone through this evolution:
Get it to compile
Make it work as expected
Figure out one solution that works
Figure out one good solution
Figure out multiple solutions, and find the best
Figure out multiple solutions, and find the best for this situation
?? haven't gotten there yet

I would say that no, it's not a bad sign. In fact you're doing yourself a favor by trending away from premature optimizations, which is definitely a Good Thing.

Learning is a brute-force process. I wouldn't say it's bad. In trying to do something that way, you may notice a pattern. As long as you are thinking about a problem and trying to find solutions, you will learn. Few people jump straight to the most elegant or efficient solution.
It would be hard to convince me that people who are trying to learn could ever be called bad. Except maybe an evil scientist :P
Good luck.

Do you fit inside the 1 minute runtime rule for the problems? If yes, then your "brute force" solution fulfils all the requirements, and that's actually a very good sign that you can quickly come up with something that works!
These kinds of problems encourage micro-optimisation and very clever algorithms, but in general a very readable straightforward implementation will be much easier to maintain, and will be favoured in the business world.

If it happens to be a situation where "brute force" => "simple" and "elegant" => "complex", then brute force wins. And this is very often true.

Not at all. Get the problem solved correctly and completely then make it more performant or elegant as necessary.
That's not to say you should ignore obvious performance improvements... Just don't focus on them until you understand the problem better.

To put this in a different context:
When you use a library that you don't know very well (for creating UI, for instance) you can solve a simple problem in a perfectly performant way, though you know there's a "correct way" to do it. If you are curious and worried that your brute-force code makes you look like a moron, you will soon find the "correct way" to do it (e.g., on weekends, or while you sleep). In the meantime, through brute force, you will have something that works.
I actually forget to use brute force sometimes, and start scanning the API for the "right" solution. This is definitely an error in many cases. If the brute force solution is easy to implement, scales as you need it to (really, if it works), then forget about the correct solution. You'll find it soon enough (and many times you already knew it!), but in the meantime, you solved the problem and were able to go on to the next one.
Roadblocks are terrible when coding, and should definitely be avoided more than brute force solutions.

It's definitely not a bad sign to trend to brute force, especially as a beginner because you may not know any better. Especially with Project Euler, it is a bad sign to implement a brute force method and not review the comments to learn a more efficient method.
I often end up in the same boat you're in and that's actually why I started doing P.E. problems -- I was implementing a lot of brute force approaches and wanted to expose myself to more elegant solutions...

You have to weigh your options. If the brute-force solution gets the job done and performs acceptably, it is a good solution.

Related

When debugging, how do you estimate if you should rewrite or keep looking?

You have all met this scenario. I am using a new algorithm for the first time, and I am sitting at my computer trying to find out if it is some syntax problem or if I have misunderstood the algorithm. In a scenario like this, I would sooner rewrite the program than spend time staring at the screen. But this raises the general question, that I am curious to hear from programmers more experienced than myself:
How do you judge when it is the right time to rewrite, or should you continue sitting there staring at your code looking for the bug?
Are there any useful heuristics that professional programmers use?
This doesn't sound right to me at all. As long as you don't understand the (required) algorithm you should not write code. That's called trial and error and is a pretty sure way to end up with poor and buggy code. Think before you act.
As food for thought, a somewhat provocative statement:
Writing code is the last thing to do. Being a coder might sound like you should be typing a lot, but in fact if I count how many characters an average programmer commits each day and ask some secretary to type the same amount, he/she would be done in under 30 minutes.
It's a very difficult question and really depends on your case.
Typically, there are two cases:
the algorithm is simple and you will find the bug fast. A rewrite is generally not necessary unless you want to optimize it.
the algorithm is really complex: it's difficult to find the bug, but it can also be difficult to rewrite, because you may miss some subtle features of the algorithm. The risk is ending up with a new algorithm that has new bugs!
I don't think there is a clean answer to that problem. I would say that it's better to find the bug than to rewrite everything. Rewriting is necessary when you need to optimize or clean up the code, not because you can't find a bug.
That's my two cents.
You should never rewrite something that has already gone through testing more than once. The theory is that if you went through it at least once, you've already ironed out the bugs, and trying to recreate the end result of those resolved bugs is very difficult. Joel has a very good article on this, and I tend to agree, even though I have been in the position you are in and my inclination was to just throw it away and rewrite it:
http://www.joelonsoftware.com/articles/fog0000000069.html

Efficient way of Writing algorithm

I was wondering: when someone asks you to solve an algorithmic problem, is it a good idea to start off with a Hashtable, HashSet or HashMap? Normally I have heard people say you shouldn't come up with hashes as your first answer.
So how should we approach algorithms: should in-place solutions (low memory use) be given priority, or should we make sure the time complexity is the best possible?
I'm not trying to generalise, but still some suggestions would be helpful.
Thanks
The best you can hope for is a generalized answer for your generalized question.
It depends.
The reason there are many different algorithms is that there is not always one algorithm that is the best, and many algorithms aim to solve different problems. For some problems it makes no sense to even talk about hash tables.
If someone asks me to solve an algorithmic problem though, I will probably try to use something that is built in to the language I'm using before designing my own algorithm. The reason is because I value my time. If I find later that the code is not efficient enough, then I can look for a better way to do it.
I think it is really situational. If random access is a priority, you need fast lookups, memory utilization is not much of a constraint, and you don't need sequential access, then a Hashtable (et al.) is the choice.
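To make that "it depends" concrete, here is a small, purely illustrative C++ sketch of one common sub-task, detecting whether any value occurs twice: the in-place version uses no extra memory but quadratic time, while the hash-based version buys average-case linear time at the cost of O(n) extra space.

    #include <cstddef>
    #include <unordered_set>
    #include <vector>

    // O(n^2) time, O(1) extra space: compare every pair.
    bool hasDuplicateInPlace(const std::vector<int>& v) {
        for (std::size_t i = 0; i < v.size(); ++i)
            for (std::size_t j = i + 1; j < v.size(); ++j)
                if (v[i] == v[j]) return true;
        return false;
    }

    // O(n) average time, O(n) extra space: remember what has been seen.
    bool hasDuplicateHashed(const std::vector<int>& v) {
        std::unordered_set<int> seen;
        for (int x : v)
            if (!seen.insert(x).second)   // insert fails => value already present
                return true;
        return false;
    }

Neither version is "the" right answer in the abstract; which one wins depends on the input size and the memory constraints of the problem.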

Problem solving/ Algorithm Skill is a knack or can be developed with practice? [closed]

Every time I start a hard problem and cannot figure out the exact solution or cannot get started, I get into this never-ending discussion with myself, as below:
Problem solving/mathematics/algorithms skills are gifted (not something you can learn by practicing; through practice you only master the kinds of problems that you have already solved before), and only those who went to good schools can do it, as they learned it early.
What are your thoughts: can one achieve awesomeness in problem solving/algorithms just by hard work, or do you need to have that extra gene in you?
I spent a big part of my life wondering whether talent was something you developed or something you were born with. Then it occurred to me that the answer was irrelevant, at least if you want to achieve things yourself. Even if you have talent, it will only help you if you act as if talent only comes from practice, because you will work that much harder.
With regards to algorithms, as well as any other really difficult skill, it takes practice to get good. Whether or not you have to have some amount of talent too, I don't know. I do know for a fact, however, that people have made huge improvements in competitions like TopCoder by practicing. I myself have learned a lot from that.
If you set up a systematic training program, you will be way ahead of the pack, even if it is not perfect. I have written a few hundred programs on TopCoder by now and it has affected my thinking in a profound way. I have learned a lot of things that could only ever be learned by doing them wrong and then fixing my mistake. A friend of mine has written several thousand programs on TopCoder and he is way better than I am, even though his stats were worse when he started out than mine were. That is no coincidence.
EDIT:
I just came across this answer at math.stackexchange. I think it is one of the best explanations of how to learn algorithms I have read, even though he writes about chess and math.
1) Don't try to solve the problem in its most general abstraction.
2) Choose the right time when your mind is working at maximum.
I got the first point as advice from a math instructor. It works! Try to work through different examples and scenarios of the problem; this helps greatly in identifying the edge cases, which are the hardest part of most problems.
My favorite time for solving these kinds of problems is dawn (4-6 AM). Have a good night's sleep and wake up ready to solve the problem. Silence is your friend.
I do believe that some people have more intelligence than others, but it is not the most important factor; it is how you utilize that intelligence to solve the problem.
I took magic lessons in a group setting when I was twelve years old. The magician's name was Joe Carota. He did a magic trick one time and I blurted out, "How did you do that?" He said something that day that has stuck with me ever since.
Joe's response, "Michael, if you really want to know how that trick is done you must figure out how you would do it yourself."
Well, of course that's not what I wanted to hear, but it did get my mind focused on problem solving. This was problem solving from my perspective. If my first attempt at solving the problem took seventeen steps and was really clunky, the good news was that I had solved the problem.
Then by looking at the solution I had developed and further looking for ways to refine that solution I would learn how to streamline the end result. Later on in my computer programming life I found out that this process was called "Stepwise Refinement".
It worked back in 1971 and it still works today.
For me, I think it's partly talent, but experience and practice are much more important. If you know many problems and the best solutions to them, you can more easily come up with a solution to a new problem.
An example from my own past: there was a programming contest (good for training, by the way) and I did not find a good solution. The winner solved the problem mainly by using a KD-tree. To come up with that, you first of all need to know what a KD-tree is and where it's useful. Today this is clear to me, and if I encountered a similar problem again, I'd be able to solve it really quickly.
Hard work beats talent if talent doesn't work hard.
This statement captures the true potential of persistence. Any skill in this world can be developed by practice. The process is analogous to driving a nail into a wall: it requires not only the correct magnitude but also the appropriate direction.
To answer the question, we first need to identify the ingredients of the ability to solve a problem.
There is a so-called natural talent. This is the talent you are born with, and it predetermines your potential. People born with more gray matter will tend to perform better than people with whom nature was less generous. This means that a person with better talent has a higher probability of performing better than a less talented person, given the same parameters (education, personality, resistance to stress, willpower). If someone finds that they need a great deal of time to absorb new information before they are able to apply it, then the wisest decision for that person is to leave programming and avoid a life full of frustration. Naturally, one cannot expect a beginner to instantly understand the most complex phenomena, but if a beginner is too slow to understand beginner concepts, then programming is not his or her cup of tea.
Developed talent. One has a natural talent, but that is in itself not enough to solve problems; I have never seen newborns writing code. One has to get some education, and the earlier, the better. The quality of the school is also of high importance. We should not deny that a person who did not have the chance to learn programming at a good school early on has a handicap in the race for success. However, if someone misses good schools early, the handicap can be overcome with hard work. For instance, my wife was educated in another field, but after finishing university she did not find a proper job, so I started to educate her. After a month she had learned how to learn and was able to solve almost any problem presented to her, but she was not yet effective. She gradually started to learn autodidactically. After a year she was already a professional coder. She does not have a paper from a school saying she can code, but she is doing a fantastic job. So, she missed early education but was later able to neutralize the handicap. Developed talent can be described as the set of information learned and known, along with the right attitude and a scientific approach to new types of challenges.
Practice: Practice is good to increase the level of developed talent, yet, it SHOULD not be the sole source of developing talent. Along with practice, the theoretical horizons must be regularly expanded.
Working strategy: One can be extremely talented, can have a lot of knowledge. If he or she does not have a right working strategy, then he or she has a handicap. Whenever a new task is given, the right questions should be asked:
what was the closest task to this one? Can I reuse my solution to an extent?
what should I learn to be able to solve this problem?
how can I write clear and efficient code to solve the problem?
So the answer is: while it is good to have excellent education as early as possible, it is not necessary. Do not forget, that life is the best school and you can recuperate the lost opportunity later if you have talent, willpower and source of information. Practice is not only showing you the right steps to solve a problem, it also widens your horizons. For instance, if one understands number systems, then he or she will be able to understand a variety of things later, like colors in CSS, PSD, or number overflows. If one learns how to code in Java, then he or she will understand C# very quickly. So, practice is giving you knowledge about the solution to a given problem type, but also, gives new theoretical knowledge which will be useful in various areas. The core skill one has to develop is the ability to learn quickly.
There have been many examples of people with extraordinary talent and minimal success. You see such examples in sports, politics, business and also in general around you. So I feel that beyond a certain point talent is a meaningless virtue; it is mostly the hard work that rewards you with greater success. If you follow cricket, here is a link with a good example.
I feel the same principle applies to algorithms and problem solving. A year back I used to pick up algorithmic problems to solve and would find myself completely lost. After a year invested in reading algorithm books, solving their exercises and practicing some more programming problems, I am confident that I can now solve most problems (I still have a long way to go in making myself efficient at it). The point is that smart work is enough to develop this knack for solving problems.
Talent is cheap and useless without hard work. Talent can only take you so far, but with hard work and practice anybody can reach great heights
- Josh Waitzkin, 8-time National Chess Champion, a 13-Time National and 2-time World Champion
He himself says this in his voice over in Chessmaster Grandmaster Edition

Is it correct to ask to solve an NP-complete problem on a job interview? [closed]

Today there was a question on SO, where the author was given an NP-complete problem during an interview and he obviously hadn't been told that it was one.
What is the purpose of asking such questions? What behavior does the interviewer expect when asking such things? Proof? Useful heuristics? And is it even legitimate to ask one if it's not a well-known NP-complete problem everyone should know about? (There are plenty of them.)
Completely legitimate to me. If you are a computer science professional, there is a good chance you can either argue informally why the problem seems to be hard, or (even better) provide a sketch of a reduction from a known NP-hard problem.
Many real-world problems eventually turn out to be NP-hard, and Stack Overflow also has, now and then, questions about the complexity of a problem which turns out to be a difficult one (NP-hard, for instance). Being able to recognize and argue for problems which are known to be difficult to solve is an important part of a CS professional's toolbox.
I don't see any problem with asking something like this. Also, programmers should NOT be expected to recognize NP-complete problems by rote. They should, however, be able to identify that their algorithm is potentially slow regardless of whether a given problem is NP-complete.
Sure, why not? NP-complete doesn't mean unsolvable, it just means your solution will be slow. You may be looking to see if the candidate will choose the brute-force solution, or try a dynamic programming solution. And this type of question can lead into questions about runtime and other useful theory.
There's a category of interview questions that are illegal in some countries, usually pertaining to personal details that are none of the employer's business. That aside, any question is fair game if the interviewer feels it'll help get an idea of the interviewee's capabilities!
If you're hiring for a position that calls for a thinker rather than just a code monkey, it may be useful to throw this kind of problem at the applicant. Who cares if a problem is "well known" to be NP? If the guy is good he'll come to that understanding in analyzing the problem. That may well be the result the interviewer wants to see, or the applicant can go on to do some more pre-analysis and describe how he'd brute-force the problem, or what optimizations he can think to apply to make it more manageable.
It's good to ask a question that is hard to answer, to see how a programmer reasons through a problem.
But it all depends on how the interviewer asks the question and prompts the programmer towards a solution if they aren't a mathematical genius (i.e. the point is to see how they reason, and how they react to questions like "that's a good start, but what if..."), rather than to check whether they can produce an optimal solution in 4.3 seconds.
It's worth remembering that interviews are highly stressful affairs in which many people find such questions very difficult to answer well - a much simpler question will usually suffice without putting the interviewee under undue stress/pressure.
Doing it deliberately to see how they deal with stress is just stupid - that isn't the sort of stress a programmer has to deal with on the job, so you're not testing anything worthwhile.
I think it's valid to ask a question you know the interviewee won't know the answer to.
Everyone encounters problems they don't know the answer to. This type of question will give you insight as to what the interviewee's internal process is. If they logically conclude things and start to formulate a correct answer, even if it's not the best dynamic programming algorithm for it, it shows that they can reason well and discover an answer.
Also, since they likely don't know everything about the problem, this sort of question lets you see how comfortable the interviewee is with asking for help or clarification.
I think the best way to answer this type of question is to ask for any clarifications if something is missing or not well known, and then postulate an answer, pointing out why you think it is correct, and why it likely isn't the best solution.
I don't see a problem with this, but I do somewhat question the usefulness of these sorts of questions in interviews in general.
The benefit of asking questions like this, as an interviewer, is to see how the person approaches a problem and how they think. If you tell them to talk it out, you can find out quite a bit about how they will approach a difficult problem.
That being said, during an interview, most people aren't at their best - so throwing something that's somewhat "tricky" like this is often overkill, IMO.
It's sort of mean to ask nigh-impossible questions without informing the interviewee of it, but in observed problem solving, the question is often asked so that you may demonstrate critical thinking skills, how you approach problem solving, and how you handle pressure or failure.
I've been asked interview questions I couldn't solve, and I don't think I've ever "failed" an interview because of it.
No, it's rude and a sign that the interviewer just likes being in a position of power. Haha, peon! I know the answer, and you don't! And boy, do I love to make you squirm trying to come up with it!
About the only way it could be even slightly valid as a useful interview question is if it were a well-known question or one that was somehow obviously NP-complete, and asked in a way that encouraged discussion of feasibility.
Is it fair to ask in an interview how to factorise numbers?
That's not known to be NP-C, but no polynomial-time solution[*] is known, so it is certainly not known to be in P.
I think the answer to both my question, and the original question, is "yes", and for the same reasons. Some problems have no solution which scales well, but do need to be solved anyway for certain inputs. If you need programmers who can handle such problems, there's a good way to let them prove it in interview, and that's to pitch them one and see whether they freak out.
If someone claims a CompSci background, then they should even be able to provide good solutions to certain NP-C problems on demand, such as solving the knapsack problem with dynamic programming. I would consider it pointless asking an applicant for a programming job to take a problem they've never seen before, and actually prove it NP-complete (for example by reducing knapsack to the specified problem). You don't need very many programmers per company who can do that (usually 0), and all you'll likely discover is how long the candidate keeps at it before attempting to change the subject and do something more valuable with the interview time...
[*] polynomial in the size of the input in bits, that is. You often see people discussing algorithmic complexity of integer problems like factorisation in terms of the size of the number represented by the input, e.g. "sqrt(N) trial divisions". But that's not how NP and NP-C are defined.
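As a sketch of the kind of answer "on demand" that the paragraph above has in mind, here is a textbook 0/1 knapsack solved with dynamic programming in C++ (a generic formulation, not any particular interviewer's expected solution):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // 0/1 knapsack: best[c] holds the best total value achievable with
    // capacity c, considering the items processed so far.
    // Runs in O(n * capacity), which is pseudo-polynomial: the problem is
    // still NP-complete, yet perfectly tractable for moderate capacities.
    int knapsack(const std::vector<int>& weight,
                 const std::vector<int>& value,
                 int capacity) {
        std::vector<int> best(capacity + 1, 0);
        for (std::size_t i = 0; i < weight.size(); ++i)
            // Iterate capacities downwards so each item is used at most once.
            for (int c = capacity; c >= weight[i]; --c)
                best[c] = std::max(best[c], best[c - weight[i]] + value[i]);
        return best[capacity];
    }

This ties back to the footnote: the running time is polynomial in the numeric value of the capacity, not in the number of bits needed to encode it, which is exactly how a pseudo-polynomial algorithm and NP-completeness coexist.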
That is evil!
If the interviewer asks an NP-complete question in an interview, the only response they can reasonably expect is that the interviewee respond with a proof that the problem is NP-complete. In a low-stress environment like a university homework assignment, this usually takes a bright student 2-3 or more hours, and the proof itself can take several pages to write out completely, perhaps several hours of work in itself. In a high-stress environment like an interview, you can expect that the interviewee may not even recognize that the problem is NP-complete.
The only reasonable alternative is that the interviewee produce an approximation algorithm; however, in this case the interviewer should make it explicitly clear that they are fine with approximations.
Even so, most approximation algorithms only come within a factor of 2 of the correct answer.
I guess there is one more alternative: the interviewee suggests that a search-type algorithm may be the most suitable (take, for example, the integer-domain optimization problem, which is NP-complete; most approximation algorithms use a branch-and-bound search spin on the simplex algorithm to produce decent results).
There is nothing wrong with giving an NP-complete problem as a programming challenge during an interview. I only see something wrong with expecting to find a polynomial-time solution to the problem during the interview.
An interviewer should want to see how a candidate deals with a variety of situations -- including situations that the candidate can't find an easy solution to. "Impossible" questions show how the candidate reacts when there's no simple solution. Does the candidate just give up? How many different attempts does the candidate search? How far-reaching are the solutions tried? When does the candidate ask for help -- and how? Does the candidate complain that the problem "isn't fair"?
In short, such an interview question isn't about solving P=NP... it's a psychological answer.
I prefer asking them to prove that P != NP or P == NP. Someday a candidate will answer it, I'll steal their answer and be famous!
On a more serious note, though, I think it's completely fair. Most NP-complete problems are easy to solve; the solutions just run very slowly. Unless the job requires them to know a lot about complexity theory, all they need to demonstrate is that they understand the solution will be slow. Bonus points if they know it's non-polynomial time, gold star if they know it's NP-complete.
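As a tiny, hypothetical illustration of "easy to solve, just slow", here is a brute-force subset-sum check in C++ that simply enumerates every subset with a bitmask:

    #include <cstddef>
    #include <vector>

    // Subset sum by exhaustive search: try all 2^n subsets via a bitmask.
    // Trivial to write and obviously correct; it just takes exponential time.
    // Assumes nums.size() < 64 so the mask fits in an unsigned 64-bit integer.
    bool subsetSumBrute(const std::vector<int>& nums, int target) {
        const std::size_t n = nums.size();
        for (unsigned long long mask = 0; mask < (1ULL << n); ++mask) {
            long long sum = 0;
            for (std::size_t i = 0; i < n; ++i)
                if (mask & (1ULL << i)) sum += nums[i];
            if (sum == target) return true;
        }
        return false;
    }

Recognizing that this runs in O(2^n), and roughly at what input size that stops being acceptable, is exactly the kind of understanding being asked for.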
If such a question were given before an interview (to be answered at the interview), I would say it's OK. But solving such a difficult problem on the spot is not going to be done well by any programmer, and if the programmer does do it well, it just means they can act on the spot (which isn't always the best thing for programming, since designing things needs time and checking every possible flaw) or that they have seen a similar problem before.
Edit:
Or possibly a discussion of the problem would be good: laying down a plan of action whether or not you completely solve it, and discussing how feasible it is and whether there is a fast (but difficult) way to do it. I would not say the interviewee should have to write more than 50 lines of C code in an interview to solve it, though.

Improve algorithmic thinking [closed]

I was thinking about ways to improve my ability to find algorithmic solutions to a problem. I have thought of solving math problems from various areas of mathematics, such as discrete mathematics or linear algebra. After "googling" a bit, I read an article that claimed I would need to learn game programming in order to achieve this, and that seems logical to me.
Do you have, or have you had, the same concerns as me, or do you have any ideas on this? I am looking forward to hearing them.
Thank you all, in advance.
P.S. 1: I want to say that I already know about programming and how to program (although I am at an amateur level :-) ) and I just want to improve at this specific thing, NOT start learning it.
P.S. 2: I think it's a useful topic for future reference, so I checked the community wiki box.
Solve problems on a daily basis. Watch traffic lights and ask yourself, "How can these be synced to optimize the flow of traffic? Or to optimize the flow of pedestrians? What is the best solution for both?". Look at elevators and ask yourself "Why should these elevators use different rules than the elevators in that other building I visited yesterday? How is it actually implemented? How can it be improved?".
Try to see a problem everywhere, even if it is solved already. Reflect on the solution. Ask yourself why your own superior solution probably isn't as good as the one you can see - what are you missing?
And so on. Every day. All of the time.
The idea is that almost everything can be viewed as an algorithm (a goal that has some kind of meaning to somebody, and a method with which to achieve it). Try to have that in mind next time you watch a gameshow on TV, or when you read the news coverage of the latest bank robbery. Ask yourself "What is the goal?", "Whose goal is it?" and "What is the method?".
It can easily be mistaken for critical thinking but is more about questioning your own solutions, rather than the solutions you try to understand and improve.
First of all, and most important: practice. Think of solutions to everything, all the time. It doesn't have to be on your computer, programming; any algorithms will do. Like this: when you used to trade cards, how did you compare your deck and your friend's to determine the best trade for both of you? How could you work out the maximum number of trades you could make without ending up with any repeated card?
Use problem databases and online judges like this site, http://uva.onlinejudge.org/index.php, which has hundreds of problems concerning general algorithms. You don't need to be an expert programmer at all to solve any of them; what you need is a good ability with logic and math. There you can find problems from the simplest ones to the most challenging. Most of them come from programming marathons.
You can, then, implement them in C, C++, Java or Pascal and submit them to the online judge. If you have a good algorithm, it will be accepted. Else, the judge will say your algorithm gave the wrong answer to the problem, or it took too long to solve.
Reading about algorithms helps, but don't waste too much time on it... Reading won't help as much as trying to solve the problems by yourself. Maybe you can read the problem, try to figure out a solution for yourself, compare with the solution proposed by the source and see what you missed. Don't try to memorize them. If you have the concept well learned, you can implement it anywhere. Understanding is the hardest part for most of them.
Polya's "How To Solve It" is a great book for thinking about how to solve mathematical problems and do proofs, and I'd recommend it for anyone who does problem solving.
But! It doesn't really address the excitement that happens when the real world provides input to your system, via channel noise, user wackiness, other programs grabbing resources, etc. For that it is worth looking at algorithms that get applied to real-world input (obligatory and deserved nod to Knuth's collection), and systems which are fairly robust in the face of same (TCP, kernel internals). Part of coming up with good algorithmic solutions is to know what already exists.
And alongside reading all that, of course practice practice practice.
You should check out Mathematics and Plausible Reasoning by G. Polya. It is a rare math book, which actually deals with the thought process involved in making mathematical discoveries. I think it is the same thought process that is involved in coming up with algorithms.
The saying "practice makes perfect" definitely applies. I'm tutoring a friend of mine in programming, and I remind him that "if you don't know how to ride a bike, you could read every book about it but it doesn't mean you'll be better than Lance Armstrong tomorrow - you have to practice".
In your case, how about trying the problems in Project Euler? http://projecteuler.net
There are a ton of problems there, and for each one you could practice at developing an algorithm. Once you get a good-enough implementation, you can access other people's solutions (for a particular problem) and see how others have done it. Don't think of it as math problems, but rather as problems in creating algorithms for solving math problems.
In university, I actually took a class in algorithm design and analysis, and there is definitely a lot of theory behind it. You may hear people talking about "big-O" complexity and stuff like that - there are quite a lot of different properties about algorithms themselves which can lead to greater understanding of what constitutes a "good" algorithm. You can study quite a bit in this regard as well for the long-term.
Check some online judges, TopCoder (algorithm tutorials). Take some algorithms book (CLRS, Skiena) and do harder exercises. Practice much.
I would suggest this path for you:
1. First learn the elementary parts of a language.
2. Then learn some basic maths.
3. Move to TopCoder Div 2 easy problems. Usually, if you cannot score 250 pts. on any given day, it means you need a lot of practice; keep practising.
4. Now is the time to learn some of a programmer's tools: take a good book like the Algorithm Design Manual by Steven Skiena and learn about dynamic programming and the greedy approach.
5. Now move on to marathons. Don't be discouraged if you cannot solve them quickly; improvement will not happen overnight, and you will have to patiently keep working hard.
6. Continue step 5 from now on and you will become a better programmer.
Learning about game programming will probably lead you to good algorithms for game programming, but not necessarily to better algorithms in general.
It's a good start, but I think that the best way to learn and apply algorithmic knowledge is:
Learn about good algorithms that currently exist for your area of interest.
Expand your knowledge by looking at other areas; for example, what kinds of algorithms are required when working on genetic analysis? What's the best approach for determining run-off potential as it relates to flooding?
Read about problems in other domains and attempt to use the algorithms that you're familiar with to see if they fit. If they don't, try to break the problem down and see if you can come up with your own algorithm.
A few more books worth reading (in no particular order):
Aha! Insight (Martin Gardner)
Any of the Programming Pearls books (Jon Bentley)
Concrete Mathematics (Graham, Knuth, and Patashnik)
A Mathematical Theory of Communication (Claude Shannon)
Of course, most of those are just samples -- other books and papers by the same authors are usually quite good as well (e.g. Shannon wrote a lot that's well worth reading, and far too few people give it the attention it deserves).
Read SICP / Structure and Interpretation of Computer Programs and work all the problems; then read The Art of Computer Programming (all volumes), working all the exercises as you go; then work through all the problems at Project Euler.
If you aren't damned good at algorithms after that, there is probably no hope for you. LOL!
P.S. SICP is available freely online, but you have to buy AoCP (get the international, not-for-release-in-north-america edition used for 30 USD). And I haven't done this yet myself (I'm trying when I have free time).
I can recommend the book "Introductory Logic and Sets for Computer Scientists" by Nimal Nissanke (Addison Wesley). The focus is on set theory, predicate logic etc. Basically the maths of solving problems in code if you will. Good stuff and not too difficult to work through.
Good luck...Kevin
OK, so to sum up the suggestions:
The most effective way to improve this ability is to solve problems as frequently as possible, either real-world problems (such as the elevator "algorithm" already suggested) or exercises from books like CLRS (great, I already own it :-)). But I didn't see comments about maths, so I don't know what to conclude about that (whether you agree or not). :-s
The links were great. I will definitely use them. I also think it will be a good exercise to solve problems from national/international informatics contests or to read the way a mathematician proves a theorem.
Thank you all again. Feel free to suggest more, although I am already satisfied with the solutions mentioned.

Resources