A confusing point about the notion of reduction in algorithms

I have studied a lot about reductions, but I have a problem with them.
I took this from CLRS:
" ... by “reducing” solving problem A to solving problem B, we use the “easiness” of B to prove the “easiness” of A."
And this from Computational Complexity by Christos H. Papadimitriou:
" ... problem A is at least as hard as problem B if B reduces to A."
I am confused by these two notions:
When we use easiness, we say that problem X reduces to problem Y, and if we have a polynomial-time algorithm for Y and the reduction is done in polynomial time, then problem X is solvable in polynomial time, so X is easier than Y, or at least is not harder than Y.
But when we use hardness, we say problem X reduces to problem Y and Y is easier than X or at least is not harder than X.
I am really confused; please help me.
Many thanks.

I think you might have missed that the first quote says "reduce A to B", and the second quote says "reduce B to A".
If X reduces to Y, meaning that Y can be used to solve X, then X is no harder than Y. That's because polynomial-complexity reduction is considered "free", so by reducing X to Y we've found a way to solve X using whatever solutions there are to Y.
So, in the first quote, if A reduces to B and B is easy, that means A is easy (strictly speaking, it's no harder).
The second quote uses the logical contrapositive: if B reduces to A and B is hard, then A must be hard (strictly speaking, it's no easier). Proof: if A were easy, then B would be easy (as above, but with A and B reversed). B is not easy, therefore A is not easy.
Your statement, "we say problem X reduces to problem Y and Y is easier than X or at least is not harder than X" is false. It is possible for X to reduce to Y (that is, we can use Y to solve X), even though Y in fact is harder than X. So we could reduce addition (X) to a special case of some NP-hard problem (Y), by defining a scheme to construct in polynomial time an instance of the NP-hard problem whose solution is the sum of our two input numbers. It doesn't mean addition is NP-hard, just that we've made things unnecessarily difficult for ourselves. It's unwise to use that reduction in order to perform addition, since there are better ways to do addition. Well, better assuming P!=NP, that is.
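To make that concrete, here is a toy sketch of such an "unnecessarily difficult" reduction. The framing as 0/1 knapsack and all the names are mine, chosen only for illustration: two non-negative integers are encoded, in polynomial time, as a tiny knapsack instance whose optimal value is exactly a + b, and the encoding itself never adds the numbers.

from itertools import combinations

def brute_force_knapsack(items, capacity):
    # Stand-in for a real knapsack solver, exponential but fine for 2 items.
    # items: list of (weight, value); returns the best achievable total value.
    best = 0
    for r in range(len(items) + 1):
        for chosen in combinations(items, r):
            if sum(w for w, _ in chosen) <= capacity:
                best = max(best, sum(v for _, v in chosen))
    return best

def add_via_knapsack(a, b, solve_knapsack=brute_force_knapsack):
    # "Reduction": two items of weight 1 with values a and b, capacity 2.
    # The knapsack optimum is a + b, yet the encoding never computes a + b.
    return solve_knapsack([(1, a), (1, b)], 2)

print(add_via_knapsack(3, 4))  # 7

Of course, nobody should add numbers this way; the point is only that the direction of a reduction says nothing about the target problem being easy.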

Think of reduction as reducing the proof that a problem is in a certain class, rather than reducing the problem itself. The relation has more to do with logic than with complexity.

The theory is simply this.
You have an algorithm that solves problem A in polynomial time.
If it is possible, in polynomial time, to convert an instance of problem B into an instance that problem A can solve, and then convert A's answer back into an answer for B, then problem B can also be solved in polynomial time: the total time is the conversion time plus A's running time on the (polynomially sized) converted instance, which is still a polynomial. Hence B is no harder.
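A minimal sketch of that pattern; the encode and decode helpers are hypothetical placeholders for the two polynomial-time conversions:

def solve_B(instance_b, solve_A, encode, decode):
    # encode: polynomial-time translation of a B-instance into an A-instance
    # solve_A: polynomial-time solver for problem A
    # decode: polynomial-time translation of A's answer back into B's answer
    return decode(solve_A(encode(instance_b)))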

Related

Numerical accuracy of an integer solution to an exponential equation

I have an algorithm that relies on integer inputs x, y and s.
For input checking and raising an exception for invalid arguments, I have to make sure that there is a natural number n such that x*s^n = y.
Or in words: How often do I have to chain-multiply x with s until I arrive at y.
And more importantly: Do I arrive at y exactly?
This problem can be further abstracted by dividing by x:
x*s^n=y => s^n=y/x => s^n=z
With z = y/x. z is not an integer in general, but one can only arrive at y using integer multiplication if y is divisible by x. So this property can easily be tested first; after that, z is guaranteed to be an integer as well, and the problem comes down to solving s^n = z.
There is already a question related to that.
There are lots of solutions. Some iterative and some solve the equation using a logarithm and either truncate, round or compare with an epsilon. I am particularly interested in the solutions with logarithms. The general idea is:
from math import log

def check(z, s):
    n = log(z) / log(s)
    return n == int(n)
Equality comparison of floating point numbers does seem pretty sketchy, though. Under normal circumstances I would not count that as a general and exact solution to the problem. Answers that suggest this approach don't mention the precision issue, and answers that use an epsilon for comparison just pick a seemingly arbitrary small number.
I wonder how robust this method (with straight equality) really is, because it seems to work pretty well and I couldn't break it with trial and error. And if it does break down at some point, how small or large would the epsilon have to be?
So basically my question is:
Can the logarithm approach be guaranteed to be exact under specific circumstances? E.g. limited integer input range.
I have thought about this for a long time now, and I think it is possible that this solution is exact and robust, at least under some circumstances. But I don't have a proof of that.
My line of thinking was:
Can I find a combination of x,y,s so that the chain-multiply just barely misses y, which means that n will be very close to an integer but not quite?
The answer is no. Because x, y and s are integers, the multiplication will also be an integer. So if the result just barely misses y, it has to miss by at least 1.
Well, that is how far I've gotten. My theory is that choosing only integers makes the calculation very precise. I would consider it a method with good numerical stability, and with very specific behaviour regarding that stability. So I believe it is possible that this calculation is precise enough to truncate all decimals. It would be amazing if someone could prove or disprove that.
If a guarantee for correctness can be given for a specific value range, I am interested in the general approach, but a fairly applicable range of values would be the positive part of int32 for the integers and double floating point precision.
Testing with an epsilon is also an option, but then the question is how small that epsilon has to be. This is probably related to the "miss by at least 1" logic.
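For concreteness, the exact integer-only version of the chain-multiply check that I am comparing the logarithm trick against would look something like this (a sketch, assuming x, y >= 1 and s >= 2; the function name is mine):

def reaches_exactly(x, y, s):
    # Is there a natural number n with x * s**n == y?  Pure integer arithmetic.
    if y % x:            # y must be divisible by x, as argued above
        return False
    z = y // x
    while z > 1:
        if z % s:        # chain-multiplying by s can never hit y
            return False
        z //= s
    return z == 1        # z == 1 means y == x * s**n for some n >= 0

print(reaches_exactly(3, 48, 2))  # True:  3 * 2**4 == 48
print(reaches_exactly(3, 50, 2))  # False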
You’re right to be skeptical of floating point. Math libraries typically
don’t provide correctly rounded transcendental functions (The Table
Maker’s Dilemma), so the exact test is suspect. Indeed, it’s not
difficult to find counterexamples (see the Python below).
Since the input z is an integer, however, we can do an error analysis to
determine an appropriate epsilon. Using calculus, one can prove a bound
log(z+1) − log(z) = log(1 + 1/z) ≥ 1/z − 1/(2z^2).
If log(z)/log(s) is not an integer, then z must be at least one away from a
power of s, putting this bound in play. If 2 ≤ z, s < 2^31 (i.e., both have
representations as signed 32-bit integers), then log(z)/log(s) is at least
(1/2^31 − 1/2^63)/log(2^31) away from an integer. An epsilon of
1.0e-12 is comfortably less than this, yet large enough that if we lose
a couple of ulps (1 ulp is on the order of 3.6e-15 in the worst case
here) to rounding, we don’t get a false negative, even with a rather
poor quality implementation of log.
import math
import random

# Search for an integer x whose square fails the exact-equality log test.
while True:
    x = random.randrange(2, 2**15)
    if math.log(x**2) / math.log(x) != 2:
        print("#", x)
        break
# 19143
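A hedged middle ground, combining the ideas above (my own variant, not from the original answers): use the logarithm only to pick a candidate exponent, then confirm it with exact integer arithmetic, so neither rounding error nor the choice of epsilon can produce a wrong answer.

import math

def is_exact_power(z, s):
    # Guess n with floating point, then verify exactly with integers.  For
    # z, s in the int32 range the float error is far below 0.5, so round()
    # picks the right candidate whenever one exists; the exact integer
    # comparison rejects everything else.
    if z < 1 or s < 2:
        return False
    n = round(math.log(z) / math.log(s))
    return n >= 0 and s ** n == z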

Why is the term 'reduce' used in the context of NP complexity?

Why is the term 'reduce' used when B is at least as hard?
In the context of NP complexity, we say that A is reducible to B in polynomial time, namely, A ≤ B where A is a known hard problem and we try to show that it can be reduced to B, a problem with unknown hardness.
Suppose we prove it successfully; that means that B is at least as hard as A. Then what exactly is being reduced? It does not seem to be in line with the meaning of 'reduce' when B is a problem that is harder and less general.
Reduce comes from the Latin "reducere", composed of "re" (back) and "ducere" (to lead). In this context, it literally means "bring back, convert", since the problem of deciding whether an element x is in A is converted to the problem of deciding whether a suitably transformed input f(x) is in B.
Let me observe that the notion of reducibility is used in many different contexts apart from (NP) complexity. In particular, it originated in computability theory.
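For reference, the standard formalization of that conversion is the many-one reduction (in the polynomial-time setting one additionally requires f to be computable in polynomial time):

$$ A \le_m B \;\iff\; \exists f \ \text{computable such that}\ \forall x:\ x \in A \Leftrightarrow f(x) \in B. $$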

Efficient Computation of The Least Fixed Point of A Polynomial

Let P(x) denote the polynomial in question. The least fixed point (LFP) of P is the lowest value of x such that x = P(x). The polynomial has real coefficients. There is no guarantee in general that an LFP will exist, although one is guaranteed to exist if the degree is odd and ≥ 3. I know of an efficient solution if the degree is 3: x = P(x), thus 0 = P(x) - x, and there is a closed-form cubic formula, so solving for x is somewhat trivial and can be hardcoded. Degrees 2 and 1 are similarly easy. It's the more complicated cases that I'm having trouble with, since I can't seem to come up with a good algorithm for arbitrary degree.
EDIT:
I'm only considering real fixed points and taking the least among them, not necessarily the fixed point with the least absolute value.
Just solve f(x) = P(x) - x using your favorite numerical method. For example, you could iterate
x_{n+1} = x_n - (P(x_n) - x_n) / (P'(x_n) - 1).
You won't find a closed-form formula in general, because there is no closed-form formula for quintic and higher-degree polynomials. Thus, for quintic and higher degrees you have to use a numerical method of some sort.
Since you want the least fixed point, you can't get away without finding all real roots of P(x) - x and selecting the smallest.
Finding all the roots of a polynomial is a tricky subject. If you have a black box routine, then by all means use it. Otherwise, consider the following trick:
Form M, the companion matrix of P(x) - x
Find all eigenvalues of M; these are exactly the roots of P(x) - x
but this requires that you have access to a routine for finding eigenvalues (which is another tricky problem, but there are plenty of good libraries); a short sketch of this approach follows below.
Otherwise, you can implement the Jenkins-Traub algorithm, which is a highly non-trivial piece of code.
I don't really recommend finding a zero (with e.g. Newton's method) and deflating until you reach degree one: it is very unstable if not done properly, and you'll lose a lot of accuracy (and it is very difficult to handle multiple roots that way). The proper way to do it is, in fact, the above-mentioned Jenkins-Traub algorithm.
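Here is the promised sketch of the companion-matrix route, using numpy (np.roots builds the companion matrix and takes its eigenvalues internally). It assumes degree ≥ 1, coefficients highest degree first, and the 1e-9 tolerance for "numerically real" is my own arbitrary choice:

import numpy as np

def least_fixed_point(coeffs):
    # coeffs: coefficients of P, highest degree first,
    # e.g. P(x) = x**3 - 2*x + 5  ->  [1, 0, -2, 5]
    q = np.array(coeffs, dtype=float)
    q[-2] -= 1.0                  # P(x) - x: subtract 1 from the x coefficient
    roots = np.roots(q)           # eigenvalues of the companion matrix
    real = roots.real[np.abs(roots.imag) < 1e-9]
    return real.min() if real.size else None   # least real fixed point, if any

# e.g. least_fixed_point([1, 0, -2, 5]) returns the only real fixed point
# of P(x) = x**3 - 2*x + 5 (about -2.28).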
This problem amounts to finding the "least" root of a polynomial (here I'm not sure whether you mean least in magnitude or actually the smallest, which could be the most negative). There is no closed-form solution for polynomials of high degree, but there are myriad numerical approaches to finding roots.
As is often the case, Wikipedia is a good place to begin your search.
If you want to find the smallest real root, you can use Descartes' rule of signs to pin down an interval where it must lie, and then use some numerical method to find the roots in that interval.

Efficient algorithm for guessing

I am abstracting a real world problem into the following question:
X is a pool of all possible permutations of letters.
Y is a pool of strings.
F is a function that takes a candidate x from X and returns a boolean value depending on whether x belongs to Y.
F is expensive and X is huge.
What is the most efficient way to extract as many results from Y as possible? False positives are ok.
There is really no way to answer this question well, as most solutions to these types of problems are highly domain-specific.
You probably should try your question here: https://cstheory.stackexchange.com/
But, to give you an example of the range of possibilities you're talking about: the Traveling Salesman problem seems similar, and it is often solved with a "self-organizing map": http://www.youtube.com/watch?v=IA6eGYMyr1A
Of course, the "solutions" people come up with for the Traveling Salesman problem don't have to be the BEST solution, just a GOOD one... so your question doesn't indicate whether or not this is applicable to your situation.
It sounds like you're asking for some sort of more efficient brute-forcing technique... but there just isn't any.
As another example, for cracking passwords (which seems similar to your question), people often try "commonly used words / passwords" first, before resorting to total brute force... but this is, again, a domain-specific solution.
Make F less expensive by implementing delta-based score calculation. Then use metaheuristics (or branch and bound) to find as many members of Y as possible (for example, use Drools Planner).
Introduce another function G which is cheap and also tests whether x belongs to Y. G must return true whenever F returns true, and may return true when F returns false. Test with G first, and test with F only if G returns true.
I don't see how to say anything more specific, considering the generality of your formulation.
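A minimal sketch of that filter-then-verify idea; G and F here stand for whatever the cheap and expensive tests actually are in your domain:

def extract(candidates, cheap_test, expensive_test):
    # cheap_test (G) may return false positives, but must never reject a true
    # member; expensive_test (F) is only paid for candidates that survive G.
    for x in candidates:
        if cheap_test(x) and expensive_test(x):
            yield x

# usage: results = list(extract(all_candidates, G, F))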

Efficient evaluation of hypergeometric functions

Does anyone have experience with algorithms for evaluating hypergeometric functions? I would be interested in general references, but I'll describe my particular problem in case someone has dealt with it.
My specific problem is evaluating a function of the form 3F2(a, b, 1; c, d; 1) where a, b, c, and d are all positive reals and c+d > a+b+1. There are many special cases that have a closed-form formula, but as far as I know there are no such formulas in general. The power series centered at zero converges at 1, but very slowly; the ratio of consecutive coefficients goes to 1 in the limit. Maybe something like Aitken acceleration would help?
I tested Aitken acceleration and it does not seem to help for this problem (nor does Richardson extrapolation). This probably means Pade approximation doesn't work either. I might have done something wrong though, so by all means try it for yourself.
I can think of two approaches.
One is to evaluate the series at some point such as z = 0.5 where convergence is rapid to get an initial value and then step forward to z = 1 by plugging the hypergeometric differential equation into an ODE solver. I don't know how well this works in practice; it might not, due to z = 1 being a singularity (if I recall correctly).
The second is to use the definition of 3F2 in terms of the Meijer G-function. The contour integral defining the Meijer G-function can be evaluated numerically by applying Gaussian or doubly-exponential quadrature to segments of the contour. This is not terribly efficient, but it should work, and it should scale to relatively high precision.
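If arbitrary precision is acceptable, one practical option (not mentioned above, so treat it as a suggestion rather than part of the original answer) is mpmath, whose hyp3f2 evaluates 3F2 at z = 1 when the series converges; the parameter values below are made up, chosen only to satisfy c + d > a + b + 1:

from mpmath import mp, hyp3f2

mp.dps = 30                        # working precision in decimal digits
a, b, c, d = 0.5, 1.25, 2.0, 1.5   # hypothetical parameters, c + d > a + b + 1
print(hyp3f2(a, b, 1, c, d, 1))    # 3F2(a, b, 1; c, d; 1)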
Is it correct that you want to sum a series where you know the ratio of successive terms and it is a rational function?
I think Gosper's algorithm and the rest of the tools for proving (and finding) hypergeometric identities do exactly this, right? (See Wilf and Zeilberger's A=B book online.)
