Come up with a polynomial algorithm

Given n checks, each of arbitrary (integer) monetary value, decide if the checks can be partitioned into two parts that have the same monetary value.
I'm beyond mind blown on how to solve this. Is there an algorithm to solve this in polynomial time or is this NP-Complete?

Yes, it's an NP-complete problem. It is exactly the partition problem, a special case of the subset sum problem.

Actually, you can solve this in O(n·sum/2) pseudo-polynomial time using dynamic programming. First add up all the checks into a variable sum (if sum is odd, the answer is immediately no), then run a DP over the check values (for each check, either take it or leave it) and check at the end whether the sum sum/2 is reachable.
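A minimal sketch of that take-it-or-leave-it DP in Python, assuming the check values are non-negative integers:

```python
def can_partition(checks):
    """Pseudo-polynomial DP: can the checks split into two equal halves?"""
    total = sum(checks)
    if total % 2 != 0:              # an odd total can never split evenly
        return False
    target = total // 2
    reachable = {0}                 # subset sums reachable so far
    for value in checks:
        # for each check: either take it or leave it
        reachable |= {s + value for s in reachable if s + value <= target}
    return target in reachable
```

Each check is considered once against at most sum/2 reachable sums, which is where the O(n·sum/2) bound comes from.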


Minimum Cut in undirected graphs

I would like to quote from Wikipedia:
In mathematics, the minimum k-cut is a combinatorial optimization
problem that requires finding a set of edges whose removal would
partition the graph into k connected components.
It is said to be the minimum cut if the set of edges is minimal.
For k = 2, this means finding the set of edges whose removal would disconnect the graph into 2 connected components.
However, the same Wikipedia article says that:
For a fixed k, the problem is polynomial time solvable in O(|V|^(k^2))
My question is: does this mean that minimum 2-cut is a problem that belongs to complexity class P?
The min-cut problem is solvable in polynomial time, so yes, it belongs to complexity class P. Another article related to this particular problem is the Max-flow min-cut theorem.
First of all, the time complexity of an algorithm should be evaluated by expressing the number of steps the algorithm requires to finish as a function of the length of the input (see Time complexity). More or less formally: if you vary the length of the input, how does the number of steps required by the algorithm vary?
Second of all, the time complexity of an algorithm is not exactly the same thing as the complexity class of the problem the algorithm solves. For one problem there can be multiple algorithms that solve it. The primality test problem (i.e. testing if a number is prime or not) is in P, but some of the algorithms used in practice are actually not polynomial.
Third of all, for most algorithms you'll find on the Internet, the time complexity is not evaluated by the definition (i.e. not as a function of the length of the input, at least not expressed directly as such). Let's take the good old naive primality test algorithm (the one in which you take n as input and check for division by 2, 3, ..., n-1). How many steps does this algorithm take? One way to put it is O(n) steps. This is correct. So is this algorithm polynomial? Well, it is linear in n, so it is polynomial in n. But if you look at what time complexity means, the algorithm is actually exponential. First, what is the length of the input to your problem? If you provide the input n as an array of bits (the usual in practice), then the length of the input is, roughly speaking, L = log n. Your algorithm thus takes O(n) = O(2^log n) = O(2^L) steps, so it is exponential in L. The naive primality test is therefore at the same time linear in n and exponential in the length of the input L. Both are correct. By the way, the AKS primality test algorithm is polynomial in the size of the input (thus, the primality test problem is in P).
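The naive test from the paragraph above, sketched in Python:

```python
def is_prime_naive(n):
    """Trial division: O(n) steps, i.e. O(2^L) in the bit length L."""
    if n < 2:
        return False
    for d in range(2, n):           # try every divisor 2, 3, ..., n-1
        if n % d == 0:
            return False
    return True
```

Adding one bit to the input (L = n.bit_length()) roughly doubles n, and with it the number of loop iterations, which is exactly the exponential behaviour described above.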
Fourth of all, what is P in the first place? Well, it is a class of problems that contains all decision problems that can be solved in polynomial time. What is a decision problem? A problem that can be answered with yes or no. Check these two Wikipedia pages for more details: P (complexity) and decision problems.
Coming back to your question, the answer is no (but pretty close to yes :p). The minimum 2-cut problem is in P if formulated as a decision problem (your formulation requires an answer that is more than just a yes-or-no). At the same time, the algorithm that solves the problem in O(|V|^4) steps is a polynomial algorithm in the size of the input. Why? Well, the input to the problem is the graph (i.e. vertices, edges and weights); to keep it simple, let's assume we use an adjacency/weight matrix (so the length of the input is at least quadratic in |V|). Solving the problem in O(|V|^4) steps is therefore polynomial in the size of the input. The algorithm that accomplishes this is a proof that the minimum 2-cut problem (formulated as a decision problem) is in P.
A class related to P is FP and your problem (as you formulated it) belongs to this class.

Finding maximum subsequence below or equal to a certain value

I'm learning dynamic programming and I've been having a great deal of trouble understanding more complex problems. When given a problem, I've been taught to find a recursive algorithm, memoize the recursive algorithm and then create an iterative, bottom-up version. At almost every step I have an issue. In terms of the recursive algorithm, I come up with several different recursive formulations, but often only one of them is suitable for dynamic programming, and I can't tell which aspects of a recursive algorithm make memoization easier. In terms of memoization, I don't understand which values to use for indices. For conversion to a bottom-up version, I can't figure out which order to fill the array/double array.
This is what I understand:
- it should be possible to split the main problem to subproblems
In terms of the problem mentioned, I've come up with a recursive algorithm that has these important lines of code:
int optionOne = values[i] + find(values, i+1, limit - values[i]); // take values[i]
int optionTwo = find(values, i+1, limit);                         // leave values[i]
If I'm unclear or this is not the correct qa site, let me know.
Edit:
Example: Given array x: [4,5,6,9,11] and max value m: 20
The maximum subsequence of x with sum at most m would be [4,5,11], as 4+5+11 = 20.
I think this problem is NP-hard, meaning that unless P = NP there isn't a polynomial-time algorithm for solving the problem.
There's a simple reduction from the subset-sum problem to this problem. In subset-sum, you're given a set of n numbers and a target number k and want to determine whether there's a subset of those numbers that adds up to exactly k. You can solve subset-sum with a solver for your problem as follows: create an array of the numbers in the set and find the largest subsequence whose sum is less than or equal to k. If that adds up to exactly k, the set has a subset that adds up to k. Otherwise, it does not.
This reduction takes polynomial time, so because subset-sum is NP-hard, your problem is NP-hard as well. Therefore, I doubt there's a polynomial-time algorithm.
That said - there is a pseudopolynomial-time algorithm for subset-sum, which is described on Wikipedia. This algorithm uses DP in two variables and isn't strictly polynomial time, but it will probably work in your case.
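A sketch of that pseudopolynomial DP in Python, tracking the set of reachable subset sums in O(n · limit) time (assuming non-negative integers); it reproduces the [4,5,6,9,11], m = 20 example from the question:

```python
def max_sum_at_most(values, limit):
    """Largest subset sum that does not exceed `limit`."""
    reachable = {0}                 # subset sums seen so far
    for v in values:
        # either take v or leave it, discarding sums over the limit
        reachable |= {s + v for s in reachable if s + v <= limit}
    return max(reachable)
```

Keeping a predecessor link alongside each sum would recover the subsequence itself (here [4, 5, 11]) rather than just its total.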
Hope this helps!

Can I use the Hungarian algorithm to find max cost?

The Hungarian algorithm solves the assignment problem in polynomial time. Given workers and tasks, and an n×n matrix containing the cost of assigning each worker to a task, it can find the cost minimizing assignment.
I want to find the assignment for which the cost is maximal. Can I do it using the Hungarian algorithm or a similar method, or can this only be done in exponential time?
Wikipedia says:
If the goal is to find the assignment that yields the maximum cost,
the problem can be altered to fit the setting by replacing each cost
with the maximum cost subtracted by the cost.
So if I understand correctly: among all the costs you have as input, you find the maximum value. Then you replace each cost x by max - x. This way you still have non-negative costs and you can run the Hungarian algorithm.
Said differently: the Hungarian algorithm tries to minimize the assignment cost, so if you are looking for the maximum, you can negate the costs: x -> -x. However, some implementations (I don't know if all, or any) require non-negative numbers, so the idea is to add a constant value to each cost in order to make the numbers non-negative. This constant value does not change the resulting optimal assignment.
As David said in the comment:
Multiply the cost matrix by -1 for maximization.
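As a sanity check on the transformation, here is a small sketch in Python; `best_assignment` is a hypothetical brute-force stand-in for a real Hungarian implementation:

```python
from itertools import permutations

def best_assignment(cost, minimize=True):
    # Hypothetical brute-force stand-in for the Hungarian algorithm:
    # tries every worker -> task permutation of an n x n cost matrix.
    n = len(cost)
    pick = min if minimize else max
    return pick(permutations(range(n)),
                key=lambda p: sum(cost[w][p[w]] for w in range(n)))

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]

# Maximizing directly...
maximizing = best_assignment(cost, minimize=False)

# ...matches minimizing after replacing each cost x by (max_cost - x),
# which also keeps every entry non-negative.
max_cost = max(c for row in cost for c in row)
flipped = [[max_cost - c for c in row] for row in cost]
assert best_assignment(flipped, minimize=True) == maximizing
```

The same check passes with the plain negation x -> -x, since adding a constant to every entry shifts every assignment's total by the same amount.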

Is linear-time reduction symmetric?

If a problem X reduces to a problem Y, is the opposite reduction also possible? Say
X = Given an array tell if all elements are distinct
Y = Sort an array using comparison sort
Now, X reduces to Y in linear time, i.e., if I can solve Y, I can solve X in linear time. Is the reverse always true? Can I solve Y, given that I can solve X? If so, how?
By reduction I mean the following:
Problem X linear reduces to problem Y if X can be solved with:
a) Linear number of standard computational steps.
b) Constant calls to subroutine for Y.
Given the example above:
You can determine if all elements are distinct in O(N) if you back them with a hash table, which allows you to check existence in O(1) plus the overhead of the hash function (which generally doesn't matter). If you are doing a non-comparison-based sort:
(see Wikipedia's list of sorting algorithms)
A specialized sort that is linear:
For simplicity, assume you're sorting a list of natural numbers. The sorting method is illustrated using uncooked rods of spaghetti:
For each number x in the list, obtain a rod of length x. (One practical way of choosing the unit is to let the largest number m in your list correspond to one full rod of spaghetti. In this case, the full rod equals m spaghetti units. To get a rod of length x, simply break a rod in two so that one piece is of length x units; discard the other piece.)
Once you have all your spaghetti rods, take them loosely in your fist and lower them to the table, so that they all stand upright, resting on the table surface. Now, for each rod, lower your other hand from above until it meets with a rod--this one is clearly the longest! Remove this rod and insert it into the front of the (initially empty) output list (or equivalently, place it in the last unused slot of the output array). Repeat until all rods have been removed.
So given a very specialized case of your problem, your statement would hold. This will not hold in the general case though, which seems to be more what you are after. It is very similar to when people think they have solved TSP, but have instead created a constrained version of the general problem that is solvable using a special algorithm.
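For reference, the hash-table distinctness check mentioned at the start of this answer is a few lines in Python (a set gives average-case O(1) membership tests, so the whole scan is expected O(N)):

```python
def all_distinct(values):
    """Expected O(N): scan once, tracking elements already seen."""
    seen = set()
    for v in values:
        if v in seen:               # duplicate found
            return False
        seen.add(v)
    return True
```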
Suppose I can solve a problem A in constant time O(1), but the best known solution for problem B takes exponential time O(2^n). It is likely that I can come up with an insanely complex way of solving problem A in O(2^n) ("reducing" problem A to B) as well, but if the answer to your question were "YES", I should then be able to make all exceedingly difficult problems solvable in O(1). Surely, that cannot be the case!
Assuming I understand what you mean by reduction, let's say that I have a problem that I can solve in O(N) using an array of key/value pairs, that being the problem of looking something up from a list. I can solve the same problem in O(1) by using a Dictionary.
Does that mean I can go back to my first technique, and use it to solve the same problem in O(1)?
I don't think so.

Is there any algorithm to solve counting change with a finite number of coins per denomination?

I know the algorithm that solves the coin change problem with an unlimited supply of each denomination, but is there a DP algorithm for the case where only a finite number of coins of each denomination is available?
Yes. Modify the initial algorithm so that, when it is about to add a coin that would exceed the number of available coins of that denomination, it skips that coin instead. Then it will only produce the valid combinations.
Another, simpler way: run the algorithm without the bounds, then filter out the invalid combinations from the output. Thinking of it this way makes it really obvious that the problem is indeed solvable.
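Following the first suggestion, a sketch in Python that caps how many coins of each denomination the DP may use; the (denomination, available_count) input format is an assumption for illustration:

```python
def count_bounded_change(amount, coins):
    """Count ways to make `amount` when each denomination has a
    limited supply; `coins` is a list of (denomination, available)."""
    ways = [0] * (amount + 1)
    ways[0] = 1                     # one way to make 0: use no coins
    for denom, available in coins:
        new_ways = [0] * (amount + 1)
        for a in range(amount + 1):
            # use k coins of this denomination, up to the supply
            k = 0
            while k <= available and k * denom <= a:
                new_ways[a] += ways[a - k * denom]
                k += 1
        ways = new_ways
    return ways[amount]
```

With two 1s and two 2s available, amount 4 has two valid combinations (2+2 and 1+1+2); the unbounded version would also count 1+1+1+1.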
