Is linear-time reduction symmetric?

If a problem X reduces to a problem Y, is the opposite reduction also possible? Say
X = Given an array tell if all elements are distinct
Y = Sort an array using comparison sort
Now, X reduces to Y in linear time, i.e. if I can solve Y, I can solve X in linear time. Is the reverse always true? Can I solve Y, given that I can solve X? If so, how?
By reduction I mean the following:
Problem X linear reduces to problem Y if X can be solved with:
a) A linear number of standard computational steps.
b) A constant number of calls to a subroutine for Y.

Given the example above:
You can determine if all elements are distinct in O(N) by backing them with a hash table, which allows you to check existence in O(1) plus the overhead of the hash function (which generally doesn't matter).
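A minimal Python sketch of that linear-time distinctness check, using a built-in set as the hash table:

def all_distinct(arr):
    # Insert each element into a hash set; seeing a repeat means a duplicate.
    # Expected O(n) time, assuming O(1) hashing per element.
    seen = set()
    for x in arr:
        if x in seen:
            return False
        seen.add(x)
    return True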
If you are doing a non-comparison-based sort (see the list of sorting algorithms), a specialized sort that is linear is possible:
For simplicity, assume you're sorting a list of natural numbers. The sorting method is illustrated using uncooked rods of spaghetti:
For each number x in the list, obtain a rod of length x. (One practical way of choosing the unit is to let the largest number m in your list correspond to one full rod of spaghetti. In this case, the full rod equals m spaghetti units. To get a rod of length x, simply break a rod in two so that one piece is of length x units; discard the other piece.)
Once you have all your spaghetti rods, take them loosely in your fist and lower them to the table, so that they all stand upright, resting on the table surface. Now, for each rod, lower your other hand from above until it meets with a rod--this one is clearly the longest! Remove this rod and insert it into the front of the (initially empty) output list (or equivalently, place it in the last unused slot of the output array). Repeat until all rods have been removed.
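One in-software analogue of the same non-comparison idea (not the spaghetti method itself) is counting sort; a minimal Python sketch, assuming non-negative integers whose maximum is not huge:

def counting_sort(nums):
    # Non-comparison sort for non-negative integers.
    # O(n + m) time where m = max(nums); linear whenever m = O(n).
    if not nums:
        return []
    counts = [0] * (max(nums) + 1)
    for x in nums:
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)
    return out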
So given a very specialized case of your problem, your statement would hold. This will not hold in the general case though, which seems to be more what you are after. It is very similar to when people think they have solved TSP, but have instead created a constrained version of the general problem that is solvable using a special algorithm.

Suppose I can solve a problem A in constant time O(1), while the best known solution to problem B takes exponential time O(2^n). It is likely that I could come up with an insanely complex way of solving problem A in O(2^n) as well ("reducing" problem A to B), but if the answer to your question were "yes", I should then be able to make all exceedingly difficult problems solvable in O(1). Surely that cannot be the case!

Assuming I understand what you mean by reduction, let's say that I have a problem that I can solve in O(N) using an array of key/value pairs, that being the problem of looking something up from a list. I can solve the same problem in O(1) by using a Dictionary.
Does that mean I can go back to my first technique, and use it to solve the same problem in O(1)?
I don't think so.
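For concreteness, the two lookup techniques being contrasted, sketched in Python with hypothetical data:

pairs = [("a", 1), ("b", 2), ("c", 3)]      # list of key/value pairs

def lookup_list(pairs, key):
    # O(N): scan every pair until the key is found.
    for k, v in pairs:
        if k == key:
            return v
    return None

index = dict(pairs)                          # the same data as a dictionary
value = index.get("b")                       # O(1) expected lookup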

Related

Complexity of solving a Diophantine equation with potential solutions

Say I have an equation, given as
13x + 5y = M, where M is given each time.
Evidently this is a Diophantine equation and could take a long time to solve depending on the case. However, a reading told me that if we have a set of k unique integer "possible solutions" for X and Y stored in a binary search tree (meaning the correct values for X and Y are contained in there somewhere), we can compute the solution pair (x, y) to the equation in O(k) time.
Now, I'm stuck on this logic because I do not see how storing elements in this data structure helps us or prevents us from having to plug in each of the k elements for X or Y, solve for the other variable, and check if the data structure contains that variable. The only thing I can think of would be somehow keeping two pointers to move along the tree, but that doesn't seem feasible.
Could someone explain the reasoning behind this logic?
Solving linear Diophantine equations (which is what you seem to be thinking of) is trivial and requires nothing more than the extended Euclidean algorithm. On the other hand, the successful resolution of Hilbert's tenth problem implies that there is no algorithm which is able to solve arbitrary Diophantine equations.
There is a vast gap between linear and arbitrary. Perhaps you can focus your question on the type of equation you are interested in.
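By way of illustration, a minimal Python sketch of that trivial route for the example equation 13x + 5y = M; since gcd(13, 5) = 1, an integer solution exists for every integer M:

def extended_gcd(a, b):
    # Returns (g, s, t) with g = gcd(a, b) and a*s + b*t = g.
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def solve_13x_5y(M):
    # One integer solution of 13x + 5y = M; all solutions are
    # (x + 5k, y - 13k) for integer k.
    g, s, t = extended_gcd(13, 5)   # g == 1 here
    return s * M, t * M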

Come up with polynomial algorithm

Given n checks, each of arbitrary (integer) monetary value, decide if the checks can be partitioned into two parts that have the same monetary value.
I'm beyond mind blown on how to solve this. Is there an algorithm to solve this in polynomial time or is this NP-Complete?
Yes, it's an NP-complete problem; it's a variation of the subset sum (partition) problem.
That said, you can solve it in O(n*sum/2) using dynamic programming: first add up all checks into a variable sum, then do a take-it-or-leave-it DP over the check values and test at the end whether sum/2 is reachable. Note that this is pseudo-polynomial rather than polynomial, since the running time depends on the magnitude of sum, not just on n.
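A minimal Python sketch of that DP, tracking the set of reachable subset sums:

def can_split_evenly(checks):
    # Pseudo-polynomial O(n * total) DP: `reachable` holds every subset sum
    # achievable so far; the split exists iff total/2 is reachable.
    total = sum(checks)
    if total % 2 != 0:
        return False
    target = total // 2
    reachable = {0}
    for value in checks:
        reachable |= {s + value for s in reachable if s + value <= target}
    return target in reachable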

Algorithmic Reduction (Median of medians, quicksort)

I'm trying to better understand reduction, and I'm currently looking at the two algorithms, "Median of medians" and Quicksort.
I understand that both algorithms use a similar (effectively identical) partition subroutine to help solve their problems, which ends up making them quite similar.
Select(A[1..n], k):   // Pseudocode for median of medians
    m = ceil(n/5)
    for i from 1 to m:
        B[i] = Select(A[5i-4..5i], 3)      // median of each group of 5
    mom = Select(B[1..m], ceil(m/2))       // median of the medians
    r = partition(A[1..n], mom)            // THIS IS THE SUBROUTINE
    if k < r:
        return Select(A[1..r-1], k)
    else if k > r:
        return Select(A[r+1..n], k-r)
    else:
        return mom
So does the term "reduction" make any sense in regards to these two algorithms? Do any of the following make sense?
Median of Medians/Quicksort can be reduced to a partition subroutine
Median of medians reduces to quicksort
Quicksort reduces to median of medians
This really depends on your definition of "reduction."
The standard type of reduction that's usually discussed is a mapping reduction (also called a many-one reduction). A mapping reduction from problem X to problem Y is the following:
Given an input IX to problem X, transform it into an input IY to problem Y. Then, run a solver for problem Y on IY and output that answer.
In a mapping reduction, you get to make exactly one call to a subroutine that solves problem Y and you have to output whatever answer you get back from that subroutine. For example, you can reduce the problem of "is this number even?" to the problem of "is this number odd?" by adding one to the number and outputting whether the resulting number is odd.
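In code, that mapping reduction is just "transform the input, make one call, return its answer verbatim" (hypothetical is_odd / is_even names):

def is_odd(n):
    return n % 2 == 1

def is_even(n):
    # Mapping reduction: transform the input, make exactly one call to the
    # solver for the other problem, and output its answer unchanged.
    return is_odd(n + 1)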
As a non-example of a mapping reduction, consider these two problems: first, the problem "is every boolean in this list true?," and second, the problem "is some boolean in this list false?" If you have a solver for the second problem, you can use it to solve the first by running the solver for the second problem and outputting the opposite result: a list of booleans has some element that's false if and only if it's not the case that every element of the list is true. However, this reduction isn't a mapping reduction because we're flipping the result produced by the subroutine.
A different type of reduction that's often used is the Turing reduction. A Turing reduction from problem X to problem Y is the following:
Build an algorithm that solves problem X assuming that there is a magic black box that always solves problem Y.
All mapping reductions are Turing reductions, but not the other way around. The above reduction from "is everything true?" to "is something false" is not a mapping reduction, but it is a Turing reduction because you can use the subroutine for "is something false?" to learn whether or not the list contains any false values, then can output the opposite.
Another major difference between mapping reductions and Turing reductions is that in a Turing reduction, you can make multiple calls to the subroutine that solves problem Y, not just one.
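And the Turing reduction from "is everything true?" to "is something false?" sketched above, which is allowed to post-process the subroutine's answer:

def some_false(bools):
    return any(not b for b in bools)

def every_true(bools):
    # Turing reduction: call the some_false "black box", then flip its
    # answer, something a mapping reduction is not allowed to do.
    return not some_false(bools)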
You can think of both quicksort and median-of-medians as algorithms that use partitioning as a subroutine. In quicksort, that subroutine does all the heavy lifting required to sort everything, and in median-of-medians it does one of the essential steps to shrink down the input. Since both algorithms make multiple calls to the subroutine, you can think of them as Turing-style reductions. Quicksort is a reduction from sorting to partitioning, while median-of-medians is a reduction from selection to partitioning.
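For concreteness, a minimal sketch of the kind of partition subroutine both algorithms lean on (one common Lomuto-style pass around a given pivot value; Python, assuming the pivot occurs in the list):

def partition(a, pivot):
    # Rearrange `a` in place around the value `pivot`: smaller elements end
    # up to its left, the rest to its right. Returns the pivot's final index.
    p = a.index(pivot)
    a[p], a[-1] = a[-1], a[p]            # park the pivot at the end
    store = 0
    for i in range(len(a) - 1):
        if a[i] < pivot:
            a[i], a[store] = a[store], a[i]
            store += 1
    a[store], a[-1] = a[-1], a[store]    # drop the pivot into its final slot
    return store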
Hope this helps!
I don't think either can be reduced to the other (in any meaningful way, anyway). You could use the median of medians to choose the pivot for a Quicksort (but nearly nobody actually does). A Quicksort still has to carry out some other steps based on the pivot element though (specifically, partitioning the data based on the pivot).
Likewise, median of medians can't be reduced to Quicksort because a Quicksort does extra work that (among other things) prevents it from meeting the complexity guarantee of the median of medians.

Finding a lower bound for an n log n algorithm

The original problem was discussed in here: Algorithm to find special point k in O(n log n) time
Simply put, we have an algorithm that decides whether a set of points in the plane has a center of symmetry or not.
I wonder, is there a way to prove an Ω(n log n) lower bound for this algorithm? I guess we need to use this algorithm to solve a simpler problem, such as sorting, element uniqueness, or set uniqueness; then we could conclude that, since e.g. element uniqueness can be solved using this algorithm, the algorithm must take at least Ω(n log n).
It seems like the solution has something to do with element uniqueness, but I couldn't figure out a way to turn that into an instance of the center-of-symmetry problem.
Check this paper
The idea is that if we can cheaply reduce a problem B to a problem A (that is, use a solver for A to solve B), then A is no easier than B.
That said, if problem B has lower bound Ω(n log n), then problem A is guaranteed the same lower bound.
In the paper, the author picked the following relatively approachable problem to be B: given two sets of n real numbers, we wish to decide whether or not they are identical.
It's obvious that this introduced problem has an Ω(n log n) lower bound; the paper reduces it to the symmetry problem at hand, which transfers that bound. For reference, the O(n log n) symmetry-detection algorithm itself goes as follows:
First observe that your magical point k must be the centroid of the set, so:
build a lookup data structure indexed by point position (O(n log n))
calculate the centroid of the set of points (O(n))
for each point, calculate the position of its reflection through the centroid and check for its existence in the lookup structure (n lookups at O(log n) each)
Appropriate lookup data structures can include basically anything that allows you to look something up efficiently by content, including balanced trees, oct-trees, hash tables, etc.
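A minimal Python sketch of those steps, assuming integer coordinates and using a hash set, with coordinates scaled by n so the reflection through the centroid stays exact:

def has_center_of_symmetry(points):
    # points: list of (x, y) integer pairs. If a center of symmetry exists
    # it must be the centroid c, and every point p must have its mirror
    # image 2c - p in the set. Scaling by n keeps everything integral.
    n = len(points)
    sx = sum(x for x, y in points)
    sy = sum(y for x, y in points)
    scaled = {(n * x, n * y) for x, y in points}
    return all((2 * sx - n * x, 2 * sy - n * y) in scaled for x, y in points)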

6SUM: Given a set S of n integers, is there a subset of S with exactly 6 elements that sum to 0? How to do better than O(n^3)?

I've thought of this simple algorithm to solve the 6SUM problem which uses O(n^3) time and space:
Generate all 3-element subsets and put them in a hash table where the key is the sum of the triple. Then iterate through the keys of the hash table: for each key k1, see if another key k2 exists such that k1 + k2 = 0 (i.e. k2 = -k1).
What's a more efficient algorithm? This is not a homework problem.
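For reference, a minimal Python sketch of the triple-hashing idea above, with an explicit index-disjointness check so the six chosen elements really are distinct positions; expected roughly O(n^3), subject to the caveats in the answer below:

from itertools import combinations
from collections import defaultdict

def six_sum_zero(nums):
    # Hash every 3-element index combination by its sum, then look for two
    # disjoint triples whose sums cancel. Expected ~O(n^3) time and O(n^3)
    # space; inputs with many equal triple sums (or adversarial hash
    # collisions) can be slower.
    by_sum = defaultdict(list)
    for t in combinations(range(len(nums)), 3):
        by_sum[sum(nums[i] for i in t)].append(t)
    for s, triples in by_sum.items():
        for t1 in triples:
            for t2 in by_sum.get(-s, []):
                if set(t1).isdisjoint(t2):
                    return sorted(t1 + t2)    # indices of a valid 6-element subset
    return None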
Your algorithm is Ω(n^6) in the worst case; it is only O(n^3) in the average case, because you are ignoring the possibility of hash table collisions. You can make it O(n^3 log n) by using a balanced tree instead, though.
Also, this is in P, as there is a trivial polynomial-time algorithm to check every possible combination of 6 numbers, so mention of knapsack etc. is irrelevant.
Like the 3-SUM problem, I believe it is still open whether the r-SUM problem admits an algorithm running in o(n^ceil(r/2)) time (note: little-o, and ceil(x) = least integer >= x, e.g. ceil(5/2) = 3).
There is a brief mention of this on the 3-SUM page here, where there is a claim that the above bounds have been proven in restricted models of computation.
So getting better than O(n^3) (i.e. o(n^3)) might be an open problem.

Resources