Is Computationally-hard is same as NP-hard? - computation-theory

I want to know whether there is any difference between NP-hard problems and computationally hard problems, or whether the two terms are used for the same thing. I have tried searching for an answer but could not find a reasonable one. Can anybody please help?

As far as we currently know (i.e. until the P=NP question is settled):
All NP-hard problems are computationally hard. But not all computationally hard problems are NP-hard (for example, problems in P whose polynomial has a high exponent).
Note that "NP-hard" is a well defined class of problems in computer science.
"Computationally hard" on the other hand isn't, as far as I know.

Related

Algorithm to do Minimization in Integer Programming

I understand that doing minimization in integer programming is a very complex problem. But what makes this problem so difficult?
If I were to (attempt to) write an algorithm to solve it, what would I need to take into account? I'm only familiar with the branch-and-bound technique for solving it, and I'm wondering what sort of roadblocks I will face when attempting to apply this technique programmatically.
"I'm wondering what sort of roadblocks I will face when attempting to apply this technique programmatically."
None in particular (assuming a fairly straightforward implementation without a lot of tricks). The algorithms aren't complicated; they are complex, and that's a fundamental difference.
Techniques such as branch and bound or branch and cut try to prune the search tree and thus speed up the running time. But the whole problem tree is nevertheless exponentially large, hence the problem.
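To make the pruning idea concrete, here is a minimal, hedged branch-and-bound sketch in Python for the 0/1 knapsack problem (one of the simplest NP-hard integer programs). The fractional relaxation supplies an optimistic bound that lets whole branches be cut off, but the worst case remains exponential; function names and test data are made up for illustration.

    def knapsack_branch_and_bound(values, weights, capacity):
        """Exact 0/1 knapsack via branch and bound with a relaxation bound."""
        # Sort items by value density so the fractional bound is tight.
        items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
        n = len(items)
        best = [0]  # best objective value found so far (the incumbent)

        def bound(i, value, room):
            # Optimistic estimate: greedily fill the remaining room, allowing
            # a fractional piece of the first item that no longer fits.
            for v, w in items[i:]:
                if w <= room:
                    room -= w
                    value += v
                else:
                    return value + v * room / w
            return value

        def branch(i, value, room):
            if value > best[0]:
                best[0] = value
            if i == n or bound(i, value, room) <= best[0]:
                return  # prune: even the optimistic bound cannot beat the incumbent
            v, w = items[i]
            if w <= room:
                branch(i + 1, value + v, room - w)  # take item i
            branch(i + 1, value, room)              # skip item i

        branch(0, 0, capacity)
        return best[0]

    print(knapsack_branch_and_bound([60, 100, 120], [10, 20, 30], 50))  # 220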
As others have said, these problems are very hard and there is no simple solution or simple algorithm that applies to all classes of problems.
The "classic" way of solving those problem is to do a branch-and-bound and apply the simplex algorithm at each node, as you say in your question. However, I would not recommand implementing this yourself if you are not an expert.
As for a lot of numerical methods, it is very hard to get it right (good parameter values, good optimisations), and a lot have been done (see CPLEX, COIN_OR, etc).
It's not that you can't do it: the branch-and-bound part is pretty straigtforward, but without all the tricks your program will be really slow.
Also, you will need a simplex implementation and this is not something you want to do yourself: you will have to use a third-part lib anyway.
Most likely, either:
if your data set is not that big (try it!) and you are not interested in solving it really fast: use something like COIN-OR or lp_solve with the default method, and it will work;
if your data set is really big (and/or you need to find a solution quickly each time), you need to work with an expert in this field.
My main point is that only experienced people will know which algorithm will perform better on your problem, which form of the model will be the easiest to solve, which method to apply, and what kind of optimisations you can try.
If you are interested in these problems, I would recommend this book for an introduction to the math behind all this (with a lot of examples). It is incredibly expensive, so you may want to go to a library instead of buying it: Nemhauser and Wolsey.
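If you go the library route suggested above, the call really is only a few lines. The sketch below assumes Python with the PuLP package, a front end that ships with the COIN-OR CBC solver; the toy model itself is invented for illustration and is not taken from the question.

    from pulp import LpProblem, LpVariable, LpMinimize, LpStatus, value

    # Toy model: minimize 3x + 4y subject to two covering constraints,
    # with x and y restricted to non-negative integers.
    prob = LpProblem("toy_integer_program", LpMinimize)
    x = LpVariable("x", lowBound=0, cat="Integer")
    y = LpVariable("y", lowBound=0, cat="Integer")

    prob += 3 * x + 4 * y    # objective
    prob += 2 * x + y >= 10  # constraint 1
    prob += x + 3 * y >= 15  # constraint 2

    prob.solve()  # defaults to the bundled CBC solver
    print(LpStatus[prob.status], value(x), value(y), value(prob.objective))  # optimal: x=3, y=4, objective=25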
Integer programming is NP-hard. That's why it is so difficult.
There is a tutorial that you might be interested in.
The first thing you do before solving any mathematical optimization problem is categorize it. Except for special cases, integer programming problems will most of the time be NP-hard. So instead of using an "algorithm", you will use a "heuristic". The final solution you find will not be a guaranteed optimum, but it will be a pretty good solution for real-life problems.
Your main roadblock will be your programming skills. Heuristic programming requires a good level of programming understanding. So instead of programming your own heuristic, you are better off using a well-known package (e.g. COIN-OR, which is free). This way you can focus on your problem instead of the heuristic.
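As a hedged sketch of what "heuristic instead of algorithm" can mean in practice, here is a greedy rule for the 0/1 knapsack problem: fast and usually decent, but with no guarantee of optimality, as the example data shows. Names and numbers are illustrative only.

    def greedy_knapsack(values, weights, capacity):
        """Heuristic 0/1 knapsack: take items in order of value density.

        Runs in O(n log n) and is often close to the optimum, but it carries
        no guarantee; use an exact solver if optimality matters."""
        order = sorted(range(len(values)),
                       key=lambda i: values[i] / weights[i], reverse=True)
        total, chosen = 0, []
        for i in order:
            if weights[i] <= capacity:
                capacity -= weights[i]
                total += values[i]
                chosen.append(i)
        return total, chosen

    print(greedy_knapsack([60, 100, 120], [10, 20, 30], 50))  # (160, [0, 1]); the optimum is 220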

What makes a problem more fundamental than another?

Is there any formal definition for what makes a problem more fundamental than another? Otherwise, what would be an acceptable informal definition?
An example of a problem that is more fundamental than another would be sorting vs font rendering.
When many problems can be solved using one algorithm, for instance. This is the case for any optimal algorithm for sorting. BTW, perhaps you're mixing problems and algorithms? There is a formal definition of one problem being reducible to another. See http://en.wikipedia.org/wiki/Reduction_(complexity)
The original question is a valid one, and does not have to assume/consider complexity and reducibility as @slebetman suggested. (Thus making the question more fundamental :)
If we attempt a formal definition, we could have this one: problem P1 is more fundamental than problem P2 if a solution to P1 affects the outcome of a wider set of other problems. This likely implies that P1 will affect problems in different domains of computer science, and possibly beyond.
In practical terms, I would again correct @slebetman. Instead of "if something uses or challenges an assumption then it is less fundamental than that assumption", I would say "if a problem uses or challenges an assumption then it is less fundamental than the same problem without the assumption". I.e. sorting of Objects is more fundamental than sorting of Integers; or, font rendering on a printer is less fundamental than font rendering on any device.
If I understood your question right.
When you can solve the same problem with many algorithms, the algorithm that proves to be the most lightweight in both memory and CPU is considered more fundamental. I can think of another criterion as well: a fundamental algorithm will not use other algorithms, otherwise it would be a composite one.
The problem or solution that has more applications is more fundamental.
(Sorting has many applications, P!=NP also has many applications (or implications), rendering has only a few.)
Inf is right to hint at complexity and reducibility, but it's not just about what is involved or included in an algorithm's implementation. It's the assumptions you make.
All pieces of code are written with assumptions about the world in which it operates. Those assumptions are more fundamental than the code written with those assumptions. In short, I would say: if something uses or challenges an assumption then it is less fundamental than that assumption.
Let's take an example: sorting. The problem with sorting is that when done the naive way, the time it takes to sort grows very quickly. In technical terms, the problem of sorting tends towards being NP-complete. So sorting algorithms are developed with the aim of avoiding being NP-complete. But is NP-complete always slow? That's a problem unto itself, and the problem is called P!=NP, which so far has not been proven either true or false. So P!=NP is more fundamental than sorting.
But even P!=NP makes some assumptions. It assumes that NP-complete problems can always be run to completion, even if it takes a long time to complete. In other words, the big-O notation for an NP-complete problem is never infinite. This raises the question: are all problems guaranteed to have a point of completion? The answer is no, not all problems have a guaranteed point of completion; the most trivial counterexample is the infinite loop: while (1) {print "still running!"}. So can we detect all cases of programs running infinitely? No, and the proof of that is the halting problem. Hence the halting problem is more fundamental than P!=NP.
But even the halting problem makes assumptions. We assume that we are running all our programs on a particular kind of CPU. Can we treat all CPUs as equivalent? Is there a CPU out there that could be built to solve the halting problem? Well, the answer is that as long as a CPU is Turing-complete, it is equivalent to every other Turing-complete CPU. Turing completeness is therefore a more fundamental problem than the halting problem.
We can go on and on questioning our assumptions: are all kinds of signals equivalent? Could a fluidic or mechanical computer be fundamentally different from an electronic computer? This leads us to things like Shannon's information theory and Boolean algebra, and each assumption we uncover is more fundamental than the one above it.

Practicing programming: solving problems of increasing difficulty

As everyone knows, real-life programming problems are numerous and often unexpected. Sometimes those problems are even hard to solve, and without being trained to recognize them you can quickly get stuck. I like a challenge, because the more often you are confronted with a recurring situation, the less time you need to come up with an efficient answer (in time complexity, for instance) when you encounter a similar issue.
This leads to my question :
Does anybody know of a good book, or any other kind of resource, that stays reasonably language-independent and provides problems of increasing difficulty for practicing coding? I mean the kind of problems that are addictive and interesting, where you feel a real sense of achievement when you solve them. Something like: if you don't find the trick that makes your algorithm linear-time, while an expensive brute-force version also exists, you will fail.
Thanks already for your suggestions.
Code Kata
High School Programming Language
France IOI (in French, but there's tons of exercises)
Have fun :)
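As a concrete example of the kind of exercise described in the question (an expensive brute force that works, plus a trick that makes it linear), the classic maximum-subarray problem fits the bill. The sketch below is illustrative, with arbitrary test data.

    def max_subarray_brute_force(a):
        """O(n^2): try every contiguous slice."""
        best = a[0]
        for i in range(len(a)):
            total = 0
            for j in range(i, len(a)):
                total += a[j]
                best = max(best, total)
        return best

    def max_subarray_kadane(a):
        """O(n): the "trick" is to either extend or restart the running sum."""
        best = current = a[0]
        for x in a[1:]:
            current = max(x, current + x)
            best = max(best, current)
        return best

    data = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
    print(max_subarray_brute_force(data), max_subarray_kadane(data))  # 6 6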

Relating NP-Complete problems to real world problems

I have a decent grasp of NP Complete problems; that's not the issue. What I don't have is a good sense of where they turn up in "real" programming. Some (like knapsack and traveling salesman) are obvious, but others don't seem obviously connected to "real" problems.
I've had the experience several times of struggling with a difficult problem only to realize it is a well known NP Complete problem that has been researched extensively. If I had recognized the connection more quickly I could have saved quite a bit of time researching existing solutions to my specific problem.
Are there any resources (online or print) that specifically connect NP Complete to real world instances?
Edit:
For example, I was working on a program that tried to divide students into groups based on age, grade, and school of origin, which is essentially a graph partitioning problem. It took me a while to realize the connection.
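A hedged sketch of how "divide students into groups" maps onto graph partitioning: treat students as nodes, give each pair a weight for how strongly you would like to keep them together (same school, same grade, and so on), and look for a balanced split that cuts as little weight as possible. The weight matrix below is hypothetical, and the brute-force enumeration is only there to show the combinatorial blow-up that the known NP-hard formulation warns you about.

    from itertools import combinations

    def min_balanced_cut(weights):
        """Brute-force balanced 2-partition of n nodes (n even).

        weights[i][j] is the penalty for separating students i and j.
        Minimizing the cut of a balanced split is graph partitioning,
        which is NP-hard, so this enumeration only works for tiny n."""
        n = len(weights)
        nodes = range(n)
        best_cut, best_group = float("inf"), None
        for group in combinations(nodes, n // 2):
            rest = [v for v in nodes if v not in group]
            cut = sum(weights[i][j] for i in group for j in rest)
            if cut < best_cut:
                best_cut, best_group = cut, group
        return best_cut, best_group

    # Hypothetical "keep together" weights for four students.
    w = [[0, 3, 1, 0],
         [3, 0, 0, 1],
         [1, 0, 0, 2],
         [0, 1, 2, 0]]
    print(min_balanced_cut(w))  # (2, (0, 1))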
I have found that Computers and Intractability is the definitive reference on this topic.
Usually the connection you are talking about is established with a so-called reduction: for example, you reduce 3-SAT to the problem you are working with, and then you can conclude that your problem is at least as hard as 3-SAT.
This step is not trivial, since you have to prove that you can turn every instance l of a known NP-hard problem L into an instance c of your problem C using a deterministic polynomial-time algorithm.
So, apart from memorizing the basic relationships between common NP-hard problems, there's no way to be sure whether a problem is equivalent to a known NP-hard one without first guessing a reduction and then proving it; you have to be smart.
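As a hedged, minimal illustration of what a reduction can look like in code (using a much simpler pair of problems than 3-SAT so that it fits in a few lines): Vertex Cover reduces to Independent Set, because a set of vertices covers every edge exactly when the remaining vertices are pairwise non-adjacent. The brute-force checker below exists only to demonstrate the equivalence on a toy instance.

    from itertools import combinations

    def reduce_vertex_cover_to_independent_set(n, edges, k):
        """Map a Vertex Cover instance (G, k) to an Independent Set instance
        (G, n - k). The transformation is trivially polynomial, which is the
        requirement the reduction has to satisfy."""
        return n, edges, n - k

    def has_independent_set(n, edges, size):
        """Brute-force check, only for demonstrating the equivalence."""
        for cand in combinations(range(n), size):
            s = set(cand)
            if all(not (u in s and v in s) for u, v in edges):
                return True
        return False

    # Tiny example: a triangle has a vertex cover of size 2, and hence
    # an independent set of size 3 - 2 = 1.
    n, edges, k = 3, [(0, 1), (1, 2), (0, 2)], 2
    print(has_independent_set(*reduce_vertex_cover_to_independent_set(n, edges, k)))  # True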
here is a wiki link:
http://wapedia.mobi/en/List_of_NP-complete_problems
Notice it says
This list is in no way comprehensive (there are more than 3000 known NP-complete problems)
It would probably be a great undertaking if anyone could compile such a list.
A theorist should try to understand/prove that a problem is NP-complete/hard. But a programmer doesn't have the time to; they need a list.
Am I correct?
I think you should google it, read through all the links, and add any new problem you find to your list.
Hope it helps
PS : Don't forget to post the list when you're finished :P
For developing better intuition the book "The Algorithm Design Manual, Second Edition" by Skiena (excerpts on google books) is simply great.
A list in the back of problems (including hard problems), each with an illustration and often a discussion with a real-world example.
Covers both the theoretical and practical side of things, often talking about actual code.
Read excerpts online here (see some examples in chapter 14):
http://books.google.dk/books?id=7XUSn0IKQEgC&printsec=frontcover#v=onepage&q&f=false
Chapter 16 (not online) discusses some hard problems, including graph partition.

Is it correct to ask to solve an NP-complete problem on a job interview? [closed]

Today there was a question on SO, where the author was given an NP-complete problem during an interview and he obviously hadn't been told that it was one.
What is the purpose of asking such questions? What behavior does the interviewer expect when asking such things? Proof? Useful heuristics? And is it even legitimate to ask one if it's not a well-known NP-complete problem everyone should know about? (There are plenty of them.)
Completely legitimate to me. If you are a computer science professional, there is a good chance that you can either argue informally why the problem seems to be hard, or (even better) provide a sketch of a reduction from a known NP-hard problem.
Many real-world problems eventually turn out to be NP-hard, and Stack Overflow also has the occasional question about the complexity of a problem which turns out to be a difficult one (NP-hard, for instance). It is an important part of a CS professional's toolbox to be able to recognize, and argue about, problems which are known to be difficult to solve.
I don't see any problem with asking something like this. Also, programmers should NOT be expected to recognize NP-complete problems by rote. They should, however, be able to identify that their algorithm is potentially slow regardless of whether a given problem is NP-complete.
Sure, why not? NP-complete doesn't mean unsolvable, it just means your solution will be slow. You may be looking to see if the candidate will choose the brute-force solution, or try a dynamic programming solution. And this type of question can lead into questions about runtime and other useful theory.
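For instance, with a classic NP-complete problem like subset sum, the dynamic-programming answer an interviewer might hope to see (as opposed to enumerating all 2^n subsets) could be a sketch like the following; the test values are arbitrary.

    def subset_sum_dp(numbers, target):
        """Pseudo-polynomial dynamic programming: O(n * target) time and
        O(target) memory instead of the 2^n brute force."""
        reachable = {0}  # sums achievable with the items seen so far
        for x in numbers:
            reachable |= {s + x for s in reachable if s + x <= target}
        return target in reachable

    print(subset_sum_dp([3, 34, 4, 12, 5, 2], 9))  # True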
There's a category of interview questions that are illegal in some countries, usually pertaining to personal details that are none of the employer's business. That aside, any question is fair game if the interviewer feels it'll help get an idea of the interviewee's capabilities!
If you're hiring for a position that calls for a thinker rather than just a code monkey, it may be useful to throw this kind of problem at the applicant. Who cares if a problem is "well known" to be NP? If the guy is good he'll come to that understanding in analyzing the problem. That may well be the result the interviewer wants to see, or the applicant can go on to do some more pre-analysis and describe how he'd brute-force the problem, or what optimizations he can think to apply to make it more manageable.
It's good to ask a question that is hard to answer, to see how a programmer reasons through a problem.
But it all depends on how the interviewer asks the question and prompts the programmer towards a solution if they aren't a mathematical genius (i.e. to see how they reason and how they react to questions like "that's a good start, but what if...", rather than to detect whether they are autistic and can provide an optimal solution in 4.3 seconds).
It's worth remembering that interviews are highly stressful affairs in which many people find such questions very difficult to answer well - a much simpler question will usually suffice without putting the interviewee under undue stress/pressure.
Doing it deliberately to see how they deal with stress is just stupid: that isn't the sort of stress a programmer has to deal with in their job, so you're not testing anything worthwhile.
I think it's valid to ask a question you know the interviewee won't know the answer to.
Everyone encounters problems they don't know the answer to. This type of question will give you insight as to what the interviewee's internal process is. If they logically conclude things and start to formulate a correct answer, even if it's not the best dynamic programming algorithm for it, it shows that they can reason well and discover an answer.
Also, since they likely don't know everything about the problem, this sort of question lets you see how comfortable the interviewee is with asking for help or clarification.
I think the best way to answer this type of question is to ask for any clarifications if something is missing or not well known, and then postulate an answer, pointing out why you think it is correct, and why it likely isn't the best solution.
I don't see a problem with this, but I do somewhat question the usefulness of these sorts of questions in interviews in general.
The benefit of asking questions like this, as an interviewer, is to see how the person approaches a problem and how they think. If you ask them to talk it out, you can find out quite a bit about how they will approach a difficult problem.
That being said, during an interview, most people aren't at their best - so throwing something that's somewhat "tricky" like this is often overkill, IMO.
It's sort of mean to ask nigh-impossible questions without informing the interviewee of it, but in observed problem solving, the question is often asked so that you may demonstrate critical thinking skills, how you approach problem solving, and how you handle pressure or failure.
I've been asked interview questions I couldn't solve, and I don't think I've ever "failed" an interview because of it.
No, it's rude and a sign that the interviewer just likes being in a position of power. Haha, peon! I know the answer, and you don't! And boy, do I love to make you squirm trying to come up with it!
About the only way it could be even slightly valid as a useful interview question is if it were a well-known question or one that was somehow obviously NP-complete, and asked in a way that encouraged discussion of feasibility.
Is it fair to ask in an interview how to factorise numbers?
That's not known to be NP-C, but no polynomial-time solution[*] is known, so it is certainly not known to be in P.
I think the answer to both my question, and the original question, is "yes", and for the same reasons. Some problems have no solution which scales well, but do need to be solved anyway for certain inputs. If you need programmers who can handle such problems, there's a good way to let them prove it in interview, and that's to pitch them one and see whether they freak out.
If someone claims a CompSci background, then they should even be able to provide good solutions to certain NP-C problems on demand, such as solving the knapsack problem with dynamic programming. I would consider it pointless asking an applicant for a programming job to take a problem they've never seen before, and actually prove it NP-complete (for example by reducing knapsack to the specified problem). You don't need very many programmers per company who can do that (usually 0), and all you'll likely discover is how long the candidate keeps at it before attempting to change the subject and do something more valuable with the interview time...
[*] polynomial in the size of the input in bits, that is. You often see people discussing algorithmic complexity of integer problems like factorisation in terms of the size of the number represented by the input, e.g. "sqrt(N) trial divisions". But that's not how NP and NP-C are defined.
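The knapsack-via-dynamic-programming solution mentioned above is small enough to sketch here (a hedged illustration; values, weights, and capacity are arbitrary test data). It also illustrates the footnote's point: the running time is polynomial in the magnitude of the capacity but exponential in the number of bits needed to write that capacity down, i.e. it is pseudo-polynomial.

    def knapsack_dp(values, weights, capacity):
        """Classic 0/1 knapsack by dynamic programming over capacities.

        Runs in O(n * capacity), which is pseudo-polynomial: fine when the
        capacity is small, exponential in the bit-length of the input."""
        best = [0] * (capacity + 1)
        for v, w in zip(values, weights):
            for c in range(capacity, w - 1, -1):  # downwards, so each item is used at most once
                best[c] = max(best[c], best[c - w] + v)
        return best[capacity]

    print(knapsack_dp([60, 100, 120], [10, 20, 30], 50))  # 220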
That is evil!
If the interviewer asks an NP-complete question in an interview, the only response they can reasonably expect is that the interviewee responds with a proof that the problem is NP-complete. In a low-stress environment like a university homework assignment, this usually takes a bright student two to three hours or more. The proof itself can take several pages to write out completely, perhaps several hours of work itself. In a high-stress environment like an interview, you can expect that the interviewee may not even recognize that the problem is NP-complete.
The only reasonable alternative is that the interviewee produce an approximation algorithm; however, in this case the interviewer should make it explicitly clear that they are fine with approximations.
Even so, most approximations only come within a factor of 2 of the correct answer.
I guess there is one more alternative: the interviewee suggests that a search-type algorithm may be the most suitable (take, for example, integer-domain optimization, which is NP-complete; most practical solvers use a branch-and-bound search built on top of the simplex algorithm to produce decent results).
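As a hedged illustration of the "within a factor of 2" remark above, the textbook approximation for minimum vertex cover fits in a few lines and is roughly the scale of thing an interviewee could produce on a whiteboard; the example graph is arbitrary.

    def vertex_cover_2_approx(edges):
        """Classic 2-approximation for minimum vertex cover: repeatedly take
        an uncovered edge and add both endpoints. The result is never more
        than twice the size of an optimal cover."""
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    # Path 0-1-2-3: the optimal cover {1, 2} has size 2; the approximation
    # may return all four vertices, which is within the factor-2 guarantee.
    print(vertex_cover_2_approx([(0, 1), (1, 2), (2, 3)]))  # e.g. {0, 1, 2, 3}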
There is nothing wrong with giving an NP-complete problem as a programming challenge during an interview. I only see something wrong with expecting to find a polynomial-time solution to the problem during the interview.
An interviewer should want to see how a candidate deals with a variety of situations -- including situations that the candidate can't find an easy solution to. "Impossible" questions show how the candidate reacts when there's no simple solution. Does the candidate just give up? How many different attempts does the candidate search? How far-reaching are the solutions tried? When does the candidate ask for help -- and how? Does the candidate complain that the problem "isn't fair"?
In short, such an interview question isn't about solving P=NP... what you get from it is a psychological answer.
I prefer asking them to prove that P != NP or P == NP. Someday a candidate will answer it, I'll steal their answer and be famous!
On a more serious note, though, I think it's completely fair. Most NP-complete problems are easy to solve; the solutions just run very slowly. Unless the job requires them to know a lot about complexity theory, all they need to demonstrate is that they understand the solution will be slow. Bonus points if they know it's non-polynomial time, gold star if they know it's NP-complete.
If such a question were given before an interview (to be answered at the interview), I would say it's OK. But solving such a difficult problem on the spot is definitely not going to be done well by any programmer, and if the programmer does do it well, that just means either they can act on the spot (which isn't always the best thing for programming, since designing things takes time and requires checking every possible flaw) or they have seen a similar problem before.
Edit:
Or possibly a discussion about the problem would be good: say, laying down a plan of action whether or not you completely solve it, and discussing how feasible it is and whether there is a fast (but difficult) way to do it. I would not say that the interviewee should have to write down over 50 lines of C code in an interview to solve it, though.
