I just have a quick question. Suppose we have two decision problems, say L1 and L2. If L1 can be reduced to L2 in polynomial time, is it then true that L2 CANNOT be reduced to L1 in polynomial time?
My understanding is that this would mean:
L1 can be reduced to L2 in polynomial time => NOT (L2 can be reduced to L1 in polynomial time)
= [(L1 not in P) & (L2 in P)] => [(L1 in P) & (L2 not in P)]
= [(L1 in P) OR (L2 not in P)] OR [(L1 in P) & (L2 not in P)]
= (L1 in P) OR (L2 not in P)
So the statement "L1 can be reduced to L2 in polytime implies that L2 cannot be reduced to L1 in polytime" is only true if L1 is in P or if L2 is not in P. In other words, if neither of those holds, then the statement is false.
Does my logic make sense or am I way off? Any advice or help would be much appreciated. Thank you!
The general statement "if L1 poly-time reduces to L2, then L2 does not poly-time reduce to L1" is false in general. Any two problems in P (except for ∅ and Σ*) are poly-time reducible to one another: just decide the problem in polynomial time and output a fixed yes-instance or no-instance of the other problem as appropriate.
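For instance (a toy Python sketch; the two languages are made up purely for illustration), take L1 = "is n even?" and L2 = "is n divisible by 3?", both in P:

def in_L1(n):                     # polynomial-time decider for L1
    return n % 2 == 0

def in_L2(m):                     # polynomial-time decider for L2
    return m % 3 == 0

def reduce_L1_to_L2(n):           # the poly-time many-one reduction
    return 3 if in_L1(n) else 1   # 3 is a fixed yes-instance of L2, 1 a fixed no-instance

# Correctness of the reduction: n is in L1 exactly when its image is in L2.
assert all(in_L1(n) == in_L2(reduce_L1_to_L2(n)) for n in range(100))

The same trick works in the other direction, so the two problems reduce to each other.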
Your particular logic is incorrect because polynomial-time reducibility between two problems does not guarantee anything about whether the languages are in P or not. For example, the halting problem is polynomial-time reducible to the problem of whether a TM accepts a given string, but neither problem is in P because neither problem is decidable.
Hope this helps!
Note: This is not asking how to solve the GCF in O(n)
You have two integers, n and i. How can we (in pseudo-code) calculate GCD(n, i) in constant time, where n and i have the domain 0 -> infinity?
The only solutions I've seen use recursion or loops. I'd like to do it in constant time if that is possible.
Thanks.
Well, technically this is possible, yes — for example, by creating a matrix of precalculated results. But this is hardly practical due to insane memory consumption.
N.B.: This method has an important prerequisite:
n, i ∉ [0; ∞), but rather n, i ∈ [0; M] for some fixed finite M, i.e. the value range can be arbitrarily large, but it must be fixed in advance.
Otherwise, the very operation of reading n and i into memory would already take time linear in their size, making O(1) impossible even in theory.
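A minimal Python sketch of the table idea (the bound M and all names here are placeholders): pay the precomputation once, then every query is a single array lookup.

import math

M = 1000  # fixed, arbitrary upper bound on n and i (the prerequisite above)

# One-time precomputation: O(M^2) time and memory, which is what makes this impractical.
TABLE = [[math.gcd(n, i) for i in range(M + 1)] for n in range(M + 1)]

def gcd_lookup(n, i):
    # O(1) per query, valid only for 0 <= n, i <= M.
    return TABLE[n][i]

print(gcd_lookup(12, 18))  # 6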
I am having a problem understanding this topic of P and NP reductions. I understand that if language L1 can be reduced to language L2 in linear time and L2 is in P, this implies that L1 is in P. But if we know L2 has a time complexity of, let's say, Theta(n log n), can we say that L1 runs in O(n log n)? Since the reduction from L1 to L2 is in linear time and L2 runs in Theta(n log n), the total would be O(n) + Theta(n log n). And let's also say L2 can be linearly reduced to L3; can we say L3 runs in Omega(n log n)?
tl;dr: Yes. And yes in case you mean big Omega.
The first part is correct: if you can decide L2 in Theta(n*log(n)), which implies it can be done in O(n*log(n)), and you can reduce L1 to L2 in O(n), then you can also decide L1 in O(n*log(n)), with exactly the argument you made. (Note: this does not mean that you can't possibly decide L1 in less than this - there might be an algorithm that solves L1 in O(n). It's only an upper bound...)
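Schematically it is just composing the reduction with a decider for L2. Here is a tiny runnable Python illustration with stand-in problems (the languages and all names are made up for illustration, not from your question): L2 = "is the string sorted?", decided here with an O(n*log(n)) sort-based check, and L1 = "is the string reverse-sorted?", reduced to L2 in O(n) by reversing.

def decide_L2(y):
    return list(y) == sorted(y)      # this particular implementation costs O(n*log(n))

def reduce_L1_to_L2(x):
    return x[::-1]                   # O(n) reduction: reverse the string

def decide_L1(x):
    y = reduce_L1_to_L2(x)           # O(n), and |y| = |x|
    return decide_L2(y)              # O(n*log(n))

# Total: O(n) + O(n*log(n)) = O(n*log(n)), an upper bound for deciding L1.
print(decide_L1("cba"))   # True  ("cba" reversed is "abc", which is sorted)
print(decide_L1("cab"))   # False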
However, the second part is not correct. If you can reduce L2 to L3, then you can say nothing about L3's running time, no matter what the running time of the reduction from L2 to L3 is. (Update: this only shows that L3 might be harder, nothing more - but see the disclaimer below.) L3 might be a very hard problem, like SAT for instance. It is then very likely that you can reduce L2 to it, i.e. that you can solve L2 by 'rephrasing' the problem (a reduction) plus a SAT solver - still, SAT is NP-complete.
DISCLAIMER: as noted in the comments by DavidRicherby, the second part of my answer is wrong as it stands - @uchman21, you were right, L3 has to be in Omega(n*log(n)) (note the upper case!):
If we know the complexity of L2 is Theta(n*log(n)) (upper and lower bounds, O(n*log(n)) and Omega(n*log(n))) and we can reduce L2 to L3 in time O(n), then L3 is at least as hard as L2 - because we know there is no algorithm that can solve L2 faster than Omega(n*log(n)). However, if L3 were faster, that is, in o(n*log(n)), then the algorithm 'reduction + solve_L3' would run in O(n) + o(n*log(n)), which is still in o(n*log(n)), and it solves L2 - contradiction. Hence, L3 has to be in Omega(n*log(n)).
I have read that an NP problem is verifiable in polynomial time or, equivalently, is solvable in polynomial time by a non-deterministic Turing machine. Why are these definitions equivalent?
First, let's show that anything that can be verified in polynomial time can be solved in nondeterministic polynomial time. Suppose you have some algorithm P(x, y) that decides whether y verifies x, and where P runs in time O(|x|^k) for some constant k. The runtime of this algorithm is polynomial in |x|, meaning that it can only look at polynomially many bits of y. Then you could build this nondeterministic polynomial-time algorithm that solves the problem:
Nondeterministically guess a string y of length O(|x|^k).
Deterministically run P(x, y) and output whatever it says.
This runs in nondeterministic polynomial time because y is constructed in nondeterministic polynomial time and P(x, y) then runs in polynomial time. If there is a y that verifies x, this machine can nondeterministically guess it. Otherwise, no guess works and the machine will output NO.
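On real, deterministic hardware you cannot guess, but you can see the shape of this machine by trying every certificate (which of course takes exponential, not polynomial, time). A small Python sketch, where the verifier and the toy example are made up for illustration:

from itertools import product

def decide_from_verifier(P, x, cert_len):
    # Deterministic stand-in for "nondeterministically guess y, then run P(x, y)":
    # a nondeterministic machine guesses y in a single branch; here we simply try
    # every candidate certificate instead.
    for bits in product("01", repeat=cert_len):
        y = "".join(bits)
        if P(x, y):              # P runs in time polynomial in |x|
            return True          # some certificate verifies x, so x is a yes-instance
    return False                 # no certificate works

# Toy verifier (made up): y verifies x when y is x reversed.
print(decide_from_verifier(lambda x, y: y == x[::-1], "10", 2))   # True (y = "01")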
The other direction is trickier. Suppose there's a nondeterministic algorithm P(x) that runs in nondeterministic time O(|x|^k) for some constant k. This nondeterministic algorithm, at each step, chooses one of c different options of what to do next. Therefore, you can make a verification program Q(x, y) where y encodes which choice to make at each step of the computation of P. It can then simulate P(x), looking at the choices encoded in y to determine which step to take next. This will run in deterministic time O(|x|^k) because it simply carries out the steps of one branch of the nondeterministic computation.
Hope this helps!
Perhaps this example gives some hint:
Given L = {w : expression w is satisfiable}
and Time for n variables:
Guess an assignment of the variables O(n)
Check if this is a satisfying assignment O(n)
Total time: O(n)
The satisfiability problem is an NP problem and believed to be intractable, but it is widely used in computing applications because each guess can be checked in linear time.
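For a formula in CNF, that linear-time check is easy to write down. A minimal Python sketch (the clause encoding, a list of clauses with each literal given as a variable index and a sign, is just an assumption for illustration):

def satisfies(clauses, assignment):
    # One linear pass over the formula: every clause must contain at least one
    # literal made true by the assignment.
    return all(
        any(assignment[var] == positive for (var, positive) in clause)
        for clause in clauses
    )

# (x0 OR NOT x1) AND (x1 OR x2), variables indexed 0..2
clauses = [[(0, True), (1, False)], [(1, True), (2, True)]]
print(satisfies(clauses, [True, True, False]))    # True: this guessed assignment works
print(satisfies(clauses, [False, True, False]))   # False: this guess fails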
The class NP is intended to isolate the notion of polynomial time “verifiability”.
NP is the class of languages that have polynomial time verifiers.
I was in the middle of reading about multithreaded merge sort in Introduction to Algorithms, 3rd edition. However, I am confused about the number of processors required for the following merge-sort algorithm:
MERGE-SORT(A, p, r)
    if p < r
        q = ⌊(p + r)/2⌋
        spawn MERGE-SORT(A, p, q)
        MERGE-SORT(A, q + 1, r)
        sync
        MERGE(A, p, q, r)
MERGE is the standard merge algorithm. Now, what is the number of processors required for this algorithm? I am assuming it should be O(n), but the book claims it is O(log n). Why? Note that I am not multithreading the MERGE procedure. An explanation with an example would be really helpful. Thanks in advance.
The O(log n) value is not the number of CPUs "required" to run the algorithm, but the actual "parallelism" achieved by the algorithm. Because MERGE itself is not parallelized, you don't get the full benefit of O(n) processors even if you have them all available.
That is, the single-threaded, serial time complexity of merge sort is O(n log n). You can think of the 'n' as the cost of the merge and the 'log n' as the factor coming from the recursive invocations of merge sort needed to get the array to a stage where you can merge it. When you parallelize the recursion but merge stays serial, you save the O(log n) factor but the O(n) factor stays there. Therefore the parallelism is of the order O(log n) when you have enough processors available, but you can't get to O(n).
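Spelled out in the book's work/span notation (T_1 is the serial work, T_inf the span; the span recurrence has only one recursive term because the two recursive calls run in parallel):

T_1(n)   = 2*T_1(n/2) + Theta(n)  = Theta(n log n)   (work: the serial running time)
T_inf(n) = T_inf(n/2) + Theta(n)  = Theta(n)         (span: the serial MERGE costs Theta(n))
parallelism = T_1(n) / T_inf(n)   = Theta(log n)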
In yet other words, even if you have O(n) CPUs available, most of them fall idle very soon, and fewer and fewer CPUs are doing work once the large MERGEs start to take place.
Assuming n is a positive integer, the composite? procedure works as follows:
(define (composite? n)
  (define (iter i)
    (cond ((= i n) #f)
          ((= (remainder n i) 0) #t)
          (else (iter (+ i 1)))))
  (iter 2))
It seems to me that the time complexity (with a tight bound) here is O(n) or rather big theta(n). I am just eyeballing it right now. Because we are adding 1 to the argument of iter every time we loop through, it seems to be O(n). Is there a better explanation?
The function as written is O(n). But if you change the test (= i n) to (< n (* i i)) the time complexity drops to O(sqrt(n)), which is a considerable improvement; if n is a million, the time complexity drops from a million to a thousand. That test works because if n = pq, one of p and q must be less than the square root of n while the other is greater than the square root of n; thus, if no factor is found less than the square root of n, n cannot be composite. Newacct's answer correctly suggests that the cost of the arithmetic matters if n is large, but the cost of the arithmetic is log log n, not log n as newacct suggests.
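In Python terms, the improved loop looks like this (a sketch of the same idea, not the original Scheme):

def composite(n):
    # Trial division up to the square root: if n = p*q, one of p and q is <= sqrt(n).
    i = 2
    while i * i <= n:            # the (< n (* i i)) test from above, negated as a loop guard
        if n % i == 0:
            return True          # found a divisor, so n is composite
        i += 1
    return False                 # no divisor up to sqrt(n), so n is not composite

print(composite(91))   # True  (91 = 7 * 13)
print(composite(97))   # False (97 is prime)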
Different people will give you different answers depending on what they assume and what they factor into the problem.
It is O(n) assuming that the equality and remainder operations you do inside each loop are O(1). It is true that the processor does these in O(1), but that only works for fixed-precision numbers. Since we are talking about asymptotic complexity, and since "asymptotic", by definition, deals with what happens when things grow without bound, we need to consider numbers that are arbitrarily big. (If the numbers in your problem were bounded, then the running time of the algorithm would also be bounded, and thus the entire algorithm would be technically O(1), obviously not what you want.)
For arbitrary-precision numbers, I would say that equality and remainder in general take time proportional to the size of the number, which is log n (unless you can optimize that away with amortized analysis somehow). So, if we take that into account, the algorithm would be O(n log n). Some might consider this to be nitpicking.