What is the complexity of the simplex algorithm for binary integer programming?

What is the complexity of the simplex algorithm for the binary integer programming problem, in the worst case and in the average case?
I'm solving the assignment problem.
References:
https://en.wikipedia.org/wiki/Integer_programming
https://en.wikipedia.org/wiki/Simplex_algorithm

Since it's for the assignment problem, that changes matters. In that case, as the wiki page notes, the constraint matrix is totally unimodular, which is exactly what you need to make your problem an instance of normal linear programming as well (that is, you can drop the integrality constraint, and the result will still be integral).
So, it can be solved in polynomial time. The simplex algorithm doesn't guarantee polynomial running time, however.
Of course there are also other polynomial time algorithms to solve the assignment problem.
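For instance, SciPy ships a polynomial-time solver for exactly this (the Hungarian-style scipy.optimize.linear_sum_assignment). A minimal sketch with a made-up cost matrix:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical cost matrix: cost[i][j] = cost of assigning worker i to task j
    cost = np.array([[4, 1, 3],
                     [2, 0, 5],
                     [3, 2, 2]])

    rows, cols = linear_sum_assignment(cost)   # optimal assignment in polynomial time
    print(list(zip(rows, cols)), cost[rows, cols].sum())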

In a general sense, binary integer programming is one of Karp's 21 NP-complete problems, so assuming P ≠ NP it's safe to say that the worst-case running time of any exact method (including simplex-based branch and bound) is superpolynomial. Again, in general, similar to SAT solvers, the "average" case is going to be heavily dependent upon what you're taking the average across. Until you've got more specific information about the class of problems you're trying to solve with simplex, I don't think there is a good answer.
I'll do some more thinking and update when I have more information.

Related

I need to solve an NP-hard problem. Is there hope?

There are a lot of real-world problems that turn out to be NP-hard. If we assume that P ≠ NP, there aren't any polynomial-time algorithms for these problems.
If you have to solve one of these problems, is there any hope that you'll be able to do so efficiently? Or are you just out of luck?
If a problem is NP-hard, under the assumption that P ≠ NP there is no algorithm that is
deterministic,
exactly correct on all inputs all the time, and
efficient on all possible inputs.
If you absolutely need all of the above guarantees, then you're pretty much out of luck. However, if you're willing to settle for a solution to the problem that relaxes some of these constraints, then there very well still might be hope! Here are a few options to consider.
Option One: Approximation Algorithms
If a problem is NP-hard and P ≠ NP, it means that there's is no algorithm that will always efficiently produce the exactly correct answer on all inputs. But what if you don't need the exact answer? What if you just need answers that are close to correct? In some cases, you may be able to combat NP-hardness by using an approximation algorithm.
For example, a canonical example of an NP-hard problem is the traveling salesman problem. In this problem, you're given as input a complete graph representing a transportation network. Each edge in the graph has an associated weight. The goal is to find a cycle that goes through every node in the graph exactly once and which has minimum total weight. In the case where the edge weights satisfy the triangle inequality (that is, the best route from point A to point B is always to follow the direct link from A to B), then you can get back a cycle whose cost is at most 3/2 times the optimal cost by using the Christofides algorithm.
As another example, the 0/1 knapsack problem is known to be NP-hard. In this problem, you're given a bag and a collection of objects with different weights and values. The goal is to pack the maximum value of objects into the bag without exceeding the bag's weight limit. Even though computing an exact answer requires exponential time in the worst case, it's possible to approximate the correct answer to an arbitrary degree of precision in polynomial time. (The algorithm that does this is called a fully polynomial-time approximation scheme or FPTAS).
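If it helps make the idea concrete, here is a rough, unoptimized sketch of the value-scaling trick behind the knapsack FPTAS (the function name and details are mine, not from any particular reference; it assumes positive values and eps > 0):

    def knapsack_fptas(values, weights, capacity, eps):
        # Round values down to multiples of K = eps * max(values) / n, run an exact
        # DP indexed by rounded value, and report the true value of the set it selects.
        # The selected set has value >= (1 - eps) * OPT.
        n = len(values)
        K = eps * max(values) / n
        scaled = [int(v // K) for v in values]
        V = sum(scaled)
        INF = float('inf')
        min_weight = [0] + [INF] * V        # lightest way to reach each scaled value
        true_value = [0] * (V + 1)          # true (unscaled) value of that same item set
        for i in range(n):
            for s in range(V, scaled[i] - 1, -1):
                w = min_weight[s - scaled[i]] + weights[i]
                if w < min_weight[s]:
                    min_weight[s] = w
                    true_value[s] = true_value[s - scaled[i]] + values[i]
        best = max(s for s in range(V + 1) if min_weight[s] <= capacity)
        return true_value[best]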
Unfortunately, we do have some theoretical limits on the approximability of certain NP-hard problems. The Christofides algorithm mentioned earlier gives a 3/2 approximation to TSP where the edges obey the triangle inequality, but interestingly enough it's possible to show that if P ≠ NP, there is no polynomial-time approximation algorithm for general TSP (without the triangle-inequality restriction) that can get within any constant factor of optimal. Usually, you need to do some research to learn more about which problems can be well-approximated and which ones can't, since many NP-hard problems can be approximated well and many can't. There doesn't seem to be a unified theme.
Option Two: Heuristics
In many NP-hard problems, standard approaches like greedy algorithms won't always produce the right answer, but often do reasonably well on "reasonable" inputs. In many cases, it's reasonable to attack NP-hard problems with heuristics. The exact definition of a heuristic varies from context to context, but typically a heuristic is either an approach to a problem that "often" gives back good answers at the cost of sometimes giving back wrong answers, or is a useful rule of thumb that helps speed up searches even if it might not always guide the search the right way.
As an example of the first type of heuristic, let's look at the graph-coloring problem. This NP-hard problem asks, given a graph, to find the minimum number of colors necessary to paint the nodes in the graph such that no edge's endpoints are the same color. This turns out to be a particularly tough problem to solve with many other approaches (the best known approximation algorithms have terrible bounds, and it's not suspected to have a parameterized efficient algorithm). However, there are many heuristics for graph coloring that do quite well in practice. Many greedy coloring heuristics exist for assigning colors to nodes in a reasonable order, and these heuristics often do quite well in practice. Unfortunately, sometimes these heuristics give terrible answers back, but provided that the graph isn't pathologically constructed the heuristics often work just fine.
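As a concrete illustration, here's a hedged sketch of one such greedy heuristic (a Welsh-Powell-style ordering by decreasing degree; the function name and graph representation are just for illustration):

    def greedy_coloring(adj):
        # adj: dict mapping each vertex to the set of its neighbors.
        # Visit vertices from highest to lowest degree and give each one the
        # smallest color not already used by a colored neighbor.
        # Fast and often close to optimal in practice, but not guaranteed optimal.
        order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
        color = {}
        for v in order:
            taken = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in taken:
                c += 1
            color[v] = c
        return color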
As an example of the second type of heuristic, it's helpful to look at SAT solvers. SAT, the Boolean satisfiability problem, was the first problem proven to be NP-hard. The problem asks, given a propositional formula (often written in conjunctive normal form), to determine whether there is a way to assign values to the variables such that the overall formula evaluates to true. Modern SAT solvers are getting quite good at solving SAT in many cases by using heuristics to guide their search over possible variable assignments. One famous SAT-solving algorithm, DPLL, essentially tries all possible assignments to see if the formula is satisfiable, using heuristics to speed up the search. For example, if it finds that a variable is either always true or always false, DPLL will try assigning that variable its forced value before trying other variables. DPLL also finds unit clauses (clauses with just one literal) and sets those variables' values before trying other variables. The net effect of these heuristics is that DPLL ends up being very fast in practice, even though it's known to have exponential worst-case behavior.
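To make the flavor of DPLL concrete, here's a stripped-down sketch of its unit-propagation-plus-backtracking core (the clause representation and helper names are mine; real solvers add far better branching heuristics, clause learning, and more):

    def dpll(clauses, assignment=None):
        # clauses: CNF as a list of lists of non-zero ints, e.g. [[1, -2], [2, 3]];
        # a positive literal i means "variable i is true", -i means it is false.
        if assignment is None:
            assignment = {}
        # Unit propagation: keep satisfying clauses that have only one literal left.
        while True:
            unit = next((c[0] for c in clauses if len(c) == 1), None)
            if unit is None:
                break
            clauses, assignment = _set_literal(clauses, assignment, unit)
            if clauses is None:
                return None                      # propagation hit a contradiction
        if not clauses:
            return assignment                    # every clause satisfied
        literal = clauses[0][0]                  # naive branching choice
        for choice in (literal, -literal):
            branch_clauses, branch_assignment = _set_literal(clauses, dict(assignment), choice)
            if branch_clauses is not None:
                result = dpll(branch_clauses, branch_assignment)
                if result is not None:
                    return result
        return None                              # unsatisfiable under this partial assignment

    def _set_literal(clauses, assignment, literal):
        # Make `literal` true and simplify; returns (None, None) on an empty clause.
        assignment[abs(literal)] = literal > 0
        simplified = []
        for c in clauses:
            if literal in c:
                continue                         # clause already satisfied
            reduced = [l for l in c if l != -literal]
            if not reduced:
                return None, None                # conflict
            simplified.append(reduced)
        return simplified, assignment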
Option Three: Pseudopolynomial-Time Algorithms
If P ≠ NP, then no NP-hard problem can be solved in polynomial time. However, in some cases, the definition of "polynomial time" doesn't necessarily match the standard intuition of polynomial time. Formally speaking, polynomial time means polynomial in the number of bits necessary to specify the input, which doesn't always sync up with what we consider the input to be.
As an example, consider the set partition problem. In this problem, you're given a set of numbers and need to determine whether there's a way to split the set into two smaller sets, each of which has the same sum. The naive solution to this problem runs in time O(2^n) and works by just brute-force testing all subsets. With dynamic programming, though, it's possible to solve this problem in time O(nN), where n is the number of elements in the set and N is the sum of the numbers in the set. Technically speaking, the runtime O(nN) is not polynomial time because the numeric value N is written out in only log₂ N bits, but assuming that the numeric value of N isn't too large, this is a perfectly reasonable runtime.
This algorithm is called a pseudopolynomial-time algorithm because the runtime O(nN) "looks" like a polynomial, but technically speaking is exponential in the size of the input. Many NP-hard problems, especially ones involving numeric values, admit pseudopolynomial-time algorithms and are therefore easy to solve assuming that the numeric values aren't too large.
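A hedged sketch of that dynamic program (names are illustrative):

    def can_partition(nums):
        # Pseudopolynomial DP: the table size is driven by the numeric total of the
        # input, not by the number of bits needed to write the input down.
        total = sum(nums)
        if total % 2:
            return False
        target = total // 2
        reachable = [True] + [False] * target    # reachable[s]: some subset sums to s
        for x in nums:
            for s in range(target, x - 1, -1):
                if reachable[s - x]:
                    reachable[s] = True
        return reachable[target]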
For more information on pseudopolynomial time, check out this earlier Stack Overflow question about pseudopolynomial time.
Option Four: Randomized Algorithms
If a problem is NP-hard and P ≠ NP, then there is no deterministic algorithm that can solve that problem in worst-case polynomial time. But what happens if we allow for algorithms that introduce randomness? If we're willing to settle for an algorithm that gives a good answer in expectation, then we can often get relatively good answers to NP-hard problems in not much time.
As an example, consider the maximum cut problem. In this problem, you're given an undirected graph and want to find a way to split the nodes in the graph into two nonempty groups A and B with the maximum number of edges running between the groups. This has some interesting applications in computational physics (unfortunately, I don't understand them at all, but you can peruse this paper for some details about this). This problem is known to be NP-hard, but there's a simple randomized approximation algorithm for it. If you just toss each node into one of the two groups completely at random, you end up with a cut that, in expectation, contains at least half as many edges as an optimal cut.
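That randomized algorithm is almost embarrassingly short; a sketch (ignoring the corner case where one side comes out empty):

    import random

    def random_cut(nodes, edges):
        # Each node goes to a random side, so each edge is cut with probability 1/2;
        # in expectation the cut contains at least half as many edges as the optimum.
        side = {v: random.random() < 0.5 for v in nodes}
        a = {v for v in nodes if side[v]}
        b = set(nodes) - a
        cut_size = sum(1 for u, v in edges if side[u] != side[v])
        return a, b, cut_size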
Returning to SAT, many modern SAT solvers use some degree of randomness to guide the search for a satisfying assignment. The WalkSAT and GSAT algorithms, for example, work by picking a random clause that isn't currently satisfied and trying to satisfy it by flipping some variable's truth value. This often guides the search toward a satisfying assignment, causing these algorithms to work well in practice.
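A rough sketch of the WalkSAT idea (parameter names and details are illustrative, not the canonical implementation):

    import random

    def walksat(clauses, variables, max_flips=100000, p=0.5):
        # Start from a random assignment; repeatedly pick an unsatisfied clause and
        # flip one of its variables: a random one with probability p, otherwise the
        # flip that leaves the fewest clauses unsatisfied.
        assign = {v: random.random() < 0.5 for v in variables}
        def satisfied(c):
            return any((l > 0) == assign[abs(l)] for l in c)
        for _ in range(max_flips):
            unsat = [c for c in clauses if not satisfied(c)]
            if not unsat:
                return assign                    # found a satisfying assignment
            clause = random.choice(unsat)
            if random.random() < p:
                var = abs(random.choice(clause))
            else:
                def unsat_after_flip(v):
                    assign[v] = not assign[v]
                    count = sum(1 for c in clauses if not satisfied(c))
                    assign[v] = not assign[v]
                    return count
                var = min({abs(l) for l in clause}, key=unsat_after_flip)
            assign[var] = not assign[var]
        return None                              # gave up; the formula may still be satisfiable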
It turns out there's a lot of open theoretical problems about the ability to solve NP-hard problems using randomized algorithms. If you're curious, check out the complexity class BPP and the open problem of its relation to NP.
Option Five: Parameterized Algorithms
Some NP-hard problems take in multiple different inputs. For example, the long path problem takes as input a graph and a length k, then asks whether there's a simple path of length k in the graph. The subset sum problem takes in as input a set of numbers and a target number k, then asks whether there's a subset of the numbers that adds up to exactly k.
Interestingly, in the case of the long path problem, there's an algorithm (the color-coding algorithm) whose runtime is O((n^3 log n) · b^k), where n is the number of nodes, k is the length of the requested path, and b is some constant. This runtime is exponential in k, but is only polynomial in n, the number of nodes. This means that if k is fixed and known in advance, the runtime of the algorithm as a function of the number of nodes is only O(n^3 log n), which is quite a nice polynomial. Similarly, in the case of the subset sum problem, there's a dynamic programming algorithm whose runtime is O(nk), where n is the number of elements of the set and k is the target value. If k is fixed in advance as some constant, then this algorithm will run in time O(n), meaning that it will be possible to exactly solve subset sum in linear time.
Both of these algorithms are examples of parameterized algorithms, algorithms for solving NP-hard problems that split the hardness of the problem into two pieces - a "hard" piece that depends on some input parameter to the problem, and an "easy" piece that scales gracefully with the size of the input. These algorithms can be useful for finding exact solutions to NP-hard problems when the parameter in question is small. The color-coding algorithm mentioned above, for example, has proven quite useful in practice in computational biology.
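For a feel of how color-coding splits the work into the two pieces, here's a hedged sketch for detecting a simple path on k vertices (the graph representation and trial count are illustrative; each trial costs roughly O(2^k · m), repeated about e^k times):

    import math
    import random

    def has_k_vertex_path(adj, k):
        # adj: dict mapping each node to an iterable of its neighbors (undirected).
        nodes = list(adj)
        trials = int(math.e ** k) + 1            # ~e^k random colorings give constant success probability
        full = (1 << k) - 1
        for _ in range(trials):
            color = {v: random.randrange(k) for v in nodes}
            # dp[v]: set of color subsets (bitmasks) realizable by a "colorful" path ending at v
            dp = {v: {1 << color[v]} for v in nodes}
            for _ in range(k - 1):               # grow the paths one vertex at a time
                new_dp = {v: set() for v in nodes}
                for v in nodes:
                    bit = 1 << color[v]
                    for u in adj[v]:
                        for mask in dp[u]:
                            if not mask & bit:
                                new_dp[v].add(mask | bit)
                dp = new_dp
            if any(full in masks for masks in dp.values()):
                return True                      # some path got k distinct colors, so it is simple
        return False                             # probably no simple path on k vertices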
However, some problems are conjectured to not have any nice parameterized algorithms. Graph coloring, for example, is suspected to not have any efficient parameterized algorithms. In the cases where parameterized algorithms exist, they're often quite efficient, but you can't rely on them for all problems.
For more information on parameterized algorithms, check out this earlier Stack Overflow question.
Option Six: Fast Exponential-Time Algorithms
Exponential-time algorithms don't scale well - their runtimes approach the lifetime of the universe for inputs as small as 100 or 200 elements.
What if you need to solve an NP-hard problem, but you know the input is reasonably small - say, perhaps its size is somewhere between 50 and 70. Standard exponential-time algorithms are probably not going to be fast enough to solve these problems. What if you really do need an exact solution to the problem and the other approaches here won't cut it?
In some cases, there are "optimized" exponential-time algorithms for NP-hard problems. These are algorithms whose runtime is exponential, but not as bad an exponential as the naive solution. For example, a simple exponential-time algorithm for the 3-coloring problem (given a graph, determine if you can color the nodes one of three colors each so that no edge's endpoints are the same color) might work by checking each possible way of coloring the nodes in the graph, testing if any of them are 3-colorings. There are 3^n possible ways to do this, so in the worst case the runtime of this algorithm will be O(3^n · poly(n)) for some small polynomial poly(n). However, using more clever tricks and techniques, it's possible to develop an algorithm for 3-colorability that runs in time O(1.3289^n). This is still an exponential-time algorithm, but it's a much faster exponential-time algorithm. For example, 3^19 is about 10^9, so if a computer can do one billion operations per second, it can use our initial brute-force algorithm to (roughly speaking) solve 3-colorability in graphs with up to 19 nodes in one second. Using the O(1.3289^n)-time exponential algorithm, we could solve instances of up to about 73 nodes in about a second. That's a huge improvement - we've grown the size we can handle in one second by more than a factor of three!
As another famous example, consider the traveling salesman problem. There's an obvious O(n! · poly(n))-time solution to TSP that works by enumerating all permutations of the nodes and testing the paths resulting from those permutations. However, by using a dynamic programming algorithm similar to that used by the color-coding algorithm, it's possible to improve the runtime to "only" O(n^2 · 2^n). Given that 13! is about six billion, the naive solution would let you solve TSP for roughly 13-node graphs in a matter of seconds. For comparison, the DP solution lets you handle graphs with roughly twice as many nodes in a comparable amount of time.
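That dynamic programming solution is the Held-Karp algorithm; a compact sketch (assumes at least two nodes and a full distance matrix):

    from itertools import combinations

    def held_karp(dist):
        # Exact TSP in O(n^2 * 2^n): dist is an n x n matrix, tour starts and ends at node 0.
        n = len(dist)
        # best[(S, j)]: cheapest path that starts at 0, visits exactly the nodes in
        # frozenset S (which contains j but not 0), and ends at j.
        best = {}
        for j in range(1, n):
            best[(frozenset([j]), j)] = dist[0][j]
        for size in range(2, n):
            for subset in combinations(range(1, n), size):
                S = frozenset(subset)
                for j in subset:
                    prev = S - {j}
                    best[(S, j)] = min(best[(prev, i)] + dist[i][j] for i in prev)
        full = frozenset(range(1, n))
        return min(best[(full, j)] + dist[j][0] for j in range(1, n))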
These fast exponential-time algorithms are often useful for boosting the size of the inputs that can be exactly solved in practice. Of course, they still run in exponential time, so these approaches are typically not useful for solving very large problem instances.
Option Seven: Solve an Easy Special Case
Many problems that are NP-hard in general have restricted special cases that are known to be solvable efficiently. For example, while in general it’s NP-hard to determine whether a graph has a k-coloring, in the specific case of k = 2 this is equivalent to checking whether a graph is bipartite, which can be checked in linear time using a modified depth-first search. Boolean satisfiability is, generally speaking, NP-hard, but it can be solved in polynomial time if you have an input formula with at most two literals per clause, or where the formula is formed from clauses using XOR rather than inclusive-OR, etc. Finding the largest independent set in a graph is generally speaking NP-hard, but if the graph is bipartite this can be done efficiently due to König’s theorem.
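As a quick illustration of the k = 2 special case, here's a sketch of the bipartiteness check (using a BFS here; a modified depth-first search works just as well):

    from collections import deque

    def is_two_colorable(adj):
        # adj: dict mapping each vertex to its neighbors. Runs in O(V + E).
        color = {}
        for start in adj:
            if start in color:
                continue
            color[start] = 0
            queue = deque([start])
            while queue:
                v = queue.popleft()
                for u in adj[v]:
                    if u not in color:
                        color[u] = 1 - color[v]   # give neighbors the opposite color
                        queue.append(u)
                    elif color[u] == color[v]:
                        return False              # odd cycle: not 2-colorable
        return True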
As a result, if you find yourself needing to solve what might initially appear to be an NP-hard problem, first check whether the inputs you actually need to solve that problem on have some additional restricted structure. If so, you might be able to find an algorithm that applies to your special case and runs much faster than a solver for the problem in its full generality.
Conclusion
If you need to solve an NP-hard problem, don't despair! There are lots of great options available that might make your intractable problem a lot more approachable. No one of the above techniques works in all cases, but by using some combination of these approaches, it's usually possible to make progress even when confronted with NP-hardness.

Why using heuristics in an algorithm takes away asymptotic optimality?

I was reading about some geometric routing algorithms, there it says that when employing heuristics in a version of the main algorithm it may improve performance, but takes away asymptotic optimality.
Why is that the case? Should we prefer asymptotic optimality over better performance? Are there prototypical cases where one should prefer asymptotic optimality? Are there any benchmarks known?
I think you are asking about optimization problems where heuristics run fast but might not find the truly optimal solution, whereas exact algorithms always return the optimal solution but can run much slower in the worst case. If so, here's some info. In general, the decision to use a heuristic algorithm often depends on how well it approximates the optimal solution in practice, whether that typical solution quality is good enough for you, and whether you think your particular problem instances resemble the ones encountered in practice. If you are interested, you can look up approximation algorithms for NP-complete problems. For some problems the score of the solution found by a heuristic is within a constant multiplier (1 + epsilon) of the score of the optimal solution, and you can choose epsilon; however, the running time typically increases as epsilon decreases.
My guess is that they are talking about use of (non-admissible) heuristics for approximation algorithms. For instance, the traveling salesman problem is NP-complete, yet there are heuristic approximation methods that are much faster than known algorithms for NP-complete problems but are only guaranteed to get within a few percent of optimal.

Can 1 approximation algorithm be used for multiple NP-Hard problems?

Since any NP-hard problem can be reduced to any other NP-hard problem by a mapping, my question goes one step further:
for example, could every step of that algorithm also be mapped to the other NP-hard problem?
Thanks in advance
From http://en.wikipedia.org/wiki/Approximation_algorithm we see that
NP-hard problems vary greatly in their approximability; some, such as the bin packing problem, can be approximated within any factor greater than 1 (such a family of approximation algorithms is often called a polynomial time approximation scheme or PTAS). Others are impossible to approximate within any constant, or even polynomial factor unless P = NP, such as the maximum clique problem.
(end quote)
It follows from this that a good approximation for one NP-complete problem is not necessarily a good approximation for another NP-complete problem. If it were, we could use easily-approximated NP-complete problems to get good approximation algorithms for all other NP-complete problems, which is not the case, since there are hard-to-approximate NP-complete problems.
When proving a problem is NP-Hard, we usually consider the decision version of the problem, whose output is either yes or no. However, when considering approximation algorithms, we consider the optimization version of the problem.
If you use one problem's approximation algorithm to solve another problem by using the reduction in the proof of NP-hardness, the approximation ratio may change. For example, if you have a 2-approximation algorithm for problem A and you use it to solve problem B, then you may only get an O(n)-approximation algorithm for problem B, since the reduction does not preserve the approximation ratio. Hence, if you want to use an approximation algorithm for one problem to solve another problem, you need to ensure that the reduction will not change the approximation ratio too much in order to get a useful algorithm. For example, you can use an L-reduction or a PTAS reduction.

Does Integer Linear Programming give optimal solution?

I am trying to implement a solution to a problem using integer linear programming (ILP). As the problem is NP-hard, I am wondering if the solution provided by the simplex method would be optimal? Can anyone comment on the optimality of ILP using the simplex method or point to some source? Is there any other algorithm that can provide an optimal solution to the ILP problem?
EDIT: I am looking for yes/no answer to the optimality of the solution obtained by any of the algorithms (Simplex Method, branch and bound and cutting planes) for ILP.
The Simplex Method doesn't handle the constraint that you want integers. Simply rounding the result is not guaranteed to give an optimal solution.
Using the Simplex Method to solve an ILP problem does work if the constraint matrix is totally dual integral.
Some algorithms that solve ILP (not constrained to totally dual integral constraint matrixes) are Branch and Bound, which is simple to implement and generally works well if the costs are reasonably uniform (very non-uniform costs make it try many attempts that look promising at first but turn out not to be), and Cutting Plane, which I honestly don't know much about but it's probably good because people are using it.
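If it helps, here's a toy sketch of LP-relaxation-based branch and bound on top of SciPy's linprog (the function name, tolerance, and depth-first branching order are all just illustrative; real solvers do far more):

    import math
    from scipy.optimize import linprog

    def branch_and_bound(c, A_ub, b_ub, bounds):
        # Minimize c @ x subject to A_ub @ x <= b_ub with every x[i] integral.
        best = {'value': math.inf, 'x': None}

        def solve(bounds):
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method='highs')
            if not res.success or res.fun >= best['value']:
                return                            # infeasible, or cannot beat the incumbent
            frac = [(i, v) for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
            if not frac:                          # LP optimum already integral: new incumbent
                best['value'], best['x'] = res.fun, [int(round(v)) for v in res.x]
                return
            i, v = frac[0]                        # branch on the first fractional variable
            lo, hi = bounds[i]
            solve(bounds[:i] + [(lo, math.floor(v))] + bounds[i + 1:])
            solve(bounds[:i] + [(math.ceil(v), hi)] + bounds[i + 1:])

        solve(list(bounds))
        return best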
The solution set for a linear programming problem is optimal by definition.
Linear programming is a class of algorithms known as "constraint satisfaction". Once you have satisfied the constraints you have solved the problem and there is no "better" solution, because by definition the best outcome is to satisfy the constraints.
If you have not completely modeled the problem, however, then obviously some other type of solution may be better.
Clarification: When I write above "satisfy the constraints", I am including maximization of objective function. The cutting plane algorithm is essentially an extension of the simplex algorithm.

What would a P=NP proof be like, hypothetically?

Would it be a polynomial-time algorithm for a specific NP-complete problem, or just abstract reasoning that demonstrates such solutions to NP-complete problems exist?
It seems that a specific algorithm would be much more helpful. With it, all we'd have to do to polynomially solve an NP problem is convert it into the specific NP-complete problem for which the proof gives a solution, and we're done.
P = NP: "The 3SAT problem is a classic NP-complete problem. In this proof, we demonstrate an algorithm to solve it that has an asymptotic bound of O(n^99 log log n). First we ..."
P != NP: "Assume there was a polynomial algorithm for the 3SAT problem. This would imply that .... which by ..... implies we can do .... and then ... and then ... which is impossible. This was all predicated on a polynomial time algorithm for 3SAT. Thus P != NP."
UPDATE: Perhaps something like this paper (for P != NP).
UPDATE 2: Here's a video of Michael Sipser sketching out a proof for P != NP
Call me pessimistic, but it will be like this:
...
∴, P ≠ NP
QED
There are some meta-results about what a P=NP or P≠NP proof can not look like. The details are quite technical, but it is known that the proof cannot be
relativizing, which kind of means that the proof must make use of the exact definition of Turing machine used, because with some modifications ("oracles", like very powerful CISC instructions added to the instruction set) P=NP, and with some other modifications, P≠NP. See also this blog post for a nice explanation of relativization.
natural, a property of several classic circuit complexity proofs,
or algebrizing, a generalization of relativizing.
It could take the form of demonstrating that assuming P ≠ NP leads to a contradiction.
It might not be connected to P and NP in a straightforward way... Many theorems now rest on the assumption P != NP, so proving one of those assumed facts untrue would make a big difference. Even proving something like a constant-ratio approximation for TSP should be enough, IIRC. I think the existence of NP-intermediate problems (such as graph isomorphism) and other such classes is also based on P != NP, so showing any of them equal to P or NP might change the situation completely.
IMHO everything happens now on a very abstract level. If someone proves anything about P=/!=NP, it doesn't have to mention any of those sets or even a specific problem.
Probably it would take the form of a polynomial-time reduction from an NP-complete problem to a problem known to be in P. See the Wikipedia page on reductions.
OR
Like this proof proposed by Vinay Deolalikar.
The most straightforward way is to prove that there is a polynomial-time solution to one of the NP-complete problems. These are problems that are in NP and to which every other problem in NP can be reduced. That means you could give a fast algorithm for the original problem posed by Stephen Cook, or for any of the many others that have since been shown to be NP-complete. See Richard Karp's seminal paper and this book for more interesting problems. It has been shown that if you solve one of these problems, the entire complexity class collapses. Edit: I have to add that I was talking to my friend who is studying quantum computation. Although I had no clue what it means, he said that a certain proof/experiment in the quantum world could make the entire complexity class, I mean the whole thing, moot. If anyone here knows more about this, please reply.
There have also been numerous attempts at the problem without giving a formal algorithm. You could try to count the set. There's the Robertson/Seymour work. People have also tried to solve it using the tried-and-tested diagonalization technique (also used to show that there are problems that can never be solved). Razborov and Rudich also showed that if certain one-way functions exist, then "natural proofs" cannot resolve the question. That means that new techniques will be required in order to settle it.
It's been 38 years since the original paper was published, and there is still no sign of a proof. Not only that, but many problems mathematicians had posed before the notion of complexity classes came along have since been shown to be NP-complete. Therefore many mathematicians and computer scientists believe some of these problems are so fundamental that a new kind of mathematics may be needed to solve them. You have to keep in mind that the best minds the human race has to offer have tackled this problem without any success. I think it will be at least decades before somebody cracks the puzzle. But even if there is a polynomial-time solution, the constants or the exponent could be so large that it would be useless for our practical problems.
There is an excellent survey available which should answer most of your questions: http://www.scottaaronson.com/papers/pnp.pdf.
Certainly a descriptive proof is the most useful, but there are other categories of proof: it is possible, for example, to provide 'existence proofs' that demonstrate that it is possible to find an answer without finding (or, sometimes, even suggesting how to find) that answer.
Set N equal to the multiplicative identity. Then NP = P. QED. ;-)
It would likely look almost precisely like one of these
Good question; it could take either form. Obviously, the specific algorithm would be more helpful, yes, but there's no guarantee that that's the way a theoretical P=NP proof would go. Given the nature of NP-complete problems and how common they are, it would seem that more effort has been put into solving those problems than into the theoretical reasoning side of the equation, but that's just supposition.
Any nonconstructive proof that P=NP really isn't nonconstructive: it would imply that the following explicit 3-SAT algorithm runs in polynomial time:
Enumerate all programs. On round i, run all programs numbered less than i for one step. If a program terminates with a satisfying input to the formula, return true. If a program terminates with a formal proof that no such input exists, return false.
If P=NP, then there exists a program which runs in O(poly(N)) and outputs a satisfying input to the formula, if such an input exists.
If P=coNP, there exists a program which runs in O(poly(N)) and outputs a formal proof that no satisfying input exists, if that is the case.
If P=NP, then since P is closed under complement NP=coNP. So, there exists a program which runs in O(poly(N)) and does both. That program is the k'th program in the enumeration. k is O(1)! Since it runs in O(poly(N)) our brute force simulation only requires
k*O(poly(N))+O(poly(N))^2
rounds once it reaches the program in question. As such, the brute force simulation runs in polynomial time!
(Note that k is exponential in the size of the program; this approach is not really feasible, but it suggests that it would be hard to do a nonconstructive proof that P=NP, even if it were the case.)
An interesting read that is somewhat related to this
To some extent, the form such a proof needs to have depends on your philosophical point of view (= the axioms you deem to be true) - e.g., as a constructivist you would demand the construction of an actual algorithm that requires polynomial time to solve an NP-complete problem. This could be done by using reduction, but not with an indirect proof. Anyhow, it really seems to be very unlikely :)
The proof would deduce a contradiction from the assumption that at least one element (problem) of NP isn't also an element of P.
