I have been given an algorithm for the Levenshtein distance: the Levenshtein distance between two character strings a and b is defined as the minimum number of single-character insertions, deletions, or substitutions (so-called edit operations) required to transform string a into string b.
The algorithm in pseudo code is:
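For context, the standard recursive formulation looks roughly like this in Python (a sketch; the given pseudocode may differ in details, and the helper name lev is only for illustration):

def lev(a, b):
    # Base cases: if one string is empty, the distance is the length of the other.
    if len(a) == 0:
        return len(b)
    if len(b) == 0:
        return len(a)
    # If the first characters match, no edit is needed for them.
    if a[0] == b[0]:
        return lev(a[1:], b[1:])
    # Otherwise try deletion, insertion and substitution: three recursive calls.
    return 1 + min(lev(a[1:], b),       # drop the first character of a (deletion)
                   lev(a, b[1:]),       # drop the first character of b (insertion)
                   lev(a[1:], b[1:]))   # drop one character of each (substitution)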
I have counted the number of recursive calls, which is 3, and the size of each subproblem, which is N - 2 (where T is expressed as a function T(N) of the total number N = |a| + |b| of input characters). So I got the recurrence T(N) = 3T(N-2) + dN, but I'm not sure whether the last term should be a constant d or proportional, dN. Using the Master Theorem, I got 2^(N log(3)) with base 4. I need some help to get the right recurrence for the worst case, and to determine whether it is dN or d.
Thank you.
It is constant.
That part of the formula is not dependent on N, as the algorithm only needs to look at (the lengths of) a and b and at a[1] and b[1] at that stage, so we can just use a constant d or even 1 for it (it doesn't matter). The other comparisons that follow in the recursion are covered by the recursive part of the relation.
A few considerations:
The recursive part of the formula is not exactly as you say. It is true that the worst case when |a| > 0 and |b| > 0 is when a[1] and b[1] are different, so we get into the min part of the formula. But there, two of the recursive calls reduce N by just 1 (not 2); see the note below.
Simplifying the relation by using just N instead of |a| and |b| hides the fact that the recursion will in most cases not continue until N is 1, but will stop at some greater value of N because by then a base case kicks in, with |a| = 0 or |b| = 0. If the purpose is only to find an upper bound for the complexity, this is of course not a problem.
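Concretely, with the usual recursive formulation (two calls that drop one character and one call that drops two), a safer worst-case recurrence is T(N) = 2T(N-1) + T(N-2) + d; the relation T(N) = 3T(N-2) + d from the question is a lower estimate rather than an upper bound.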
See also Calculating the complexity of Levenshtein Edit Distance.
Related
An algorithm solves a problem of size n as follows: it recursively solves 3 subproblems of size n - 2, and then constructs the answer for the original problem in time O(1).
It seems obvious that the Master theorem cannot be applied here. So I thought of drawing a recursion tree, but what bothers me is: do I need to consider two cases, when n is odd and when n is even? Or would the resulting sum, and hence the running time, not depend on this at all? Thanks in advance.
We'll have to assume that the base case of the recursion occurs when n is either 0 or 1. It could also be that the base case occurs when n is either 1 or 2, and n is not allowed to be 0. We don't really know, but it is not that relevant.
In the first case, the number of operations for a given odd n is the same as the number of operations for n-1. In the second case it would be the same number of operations as for n+1.
So, for determining the asymptotic complexity we can look at just the even numbers (or only the odd numbers).
The recursion tree is a perfect 3-ary tree with a height of (n+1)/2 or (n+2)/2 (again: depending on the base case).
In a perfect 3-ary tree of height h, we have (3^h - 1)/2 nodes, so that is O(3^h), which in terms of n is O(3^(n/2)) = O((√3)^n).
Another (possibly simpler) solution, in addition to the one proposed by @trincot:
T(n) = 3T(n-2) + O(1) = ... = 3^k (T(n-2k) + O(1)).
Let's find out the stopping condition for this recurrence relation, i.e. we need to find the k for which the following holds: n - 2k = 0 --> k = n/2 --> T(n) = O(3^(n/2)) = O((√3)^n)
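If you want to sanity-check this bound numerically, here is a small sketch (the base case T(0) = T(1) = 1 is an assumption made only for the illustration):

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # The recurrence T(n) = 3*T(n-2) + 1, with an assumed base case T(0) = T(1) = 1.
    if n <= 1:
        return 1
    return 3 * T(n - 2) + 1

for n in (10, 20, 30):
    # The ratio T(n) / 3^(n/2) approaches 1.5 for even n, consistent with O(3^(n/2)).
    print(n, T(n), T(n) / 3 ** (n / 2))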
I am comparing two algorithms that determine whether a number is prime. I am looking at the upper bound for time complexity, but I can't understand the time complexity difference between the two, even though in practice one algorithm is faster than the other.
This pseudocode runs in exponential time, O(2^n):
def Prime(n):
    for i in range(2, n - 1):
        if n % i == 0:
            return False
    return True
This pseudocode runs in half the time of the previous example, but I'm struggling to understand whether the time complexity is still O(2^n) or not:
def Prime(n):
    for i in range(2, n // 2 + 1):
        if n % i == 0:
            return False
    return True
As a simple intuition of what big-O and big-Θ (big-Theta) are about: they describe how the number of operations you need to perform changes when you significantly increase the size of the problem (for example by a factor of 2).
Linear time complexity means that when you increase the size by a factor of 2, the number of steps you need to perform also increases by about a factor of 2. This is what is called Θ(n), and often, interchangeably but not accurately, O(n) (the difference between O and Θ is that O provides only an upper bound while Θ guarantees both upper and lower bounds).
Logarithmic time complexity (Θ(log N)) means that when you increase the size by a factor of 2, the number of steps you need to perform increases by some fixed number of operations. For example, using binary search you can find a given element in a list twice as long using just one more loop iteration.
Similarly, exponential time complexity (Θ(a^N) for some constant a > 1) means that if you increase the size of the problem just by 1, you need a times more operations. (Note that there is a subtle difference between Θ(2^N) and 2^Θ(N); the second one is actually more generic. Both lie inside exponential time, but neither of the two covers it all; see the wiki for some more details.)
Note that those definitions significantly depend on how you define "the size of the task".
As @DavidEisenstat correctly pointed out, there are two possible contexts in which your algorithm can be seen:
Fixed-width numbers (for example 32-bit numbers). In such a context an obvious measure of the complexity of the prime-testing algorithm is the value being tested itself. In that case your algorithm is linear.
In practice there are many contexts where a prime-testing algorithm should work for really big numbers. For example, many crypto-algorithms used today (such as Diffie–Hellman key exchange or RSA) rely on very big prime numbers, like 512 bits, 1024 bits and so on. Also, in those contexts the security is measured in the number of those bits rather than in the particular prime value. So in such contexts a natural way to measure the size of the task is the number of bits. And now the question arises: how many operations do we need to perform to check a value of known size in bits using your algorithm? Obviously, if the value N has m bits, it is about N ≈ 2^m. So your algorithm converts from linear Θ(N) into exponential 2^Θ(m). In other words, to solve the problem for a value just 1 bit longer, you need to do about 2 times more work.
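A tiny illustration of that last point (an added sketch, not part of the original answer):

# A value with m bits is about 2^m, so trial division that is linear in the
# value is exponential in the number of bits m.
for m in (8, 16, 32, 64):
    n = 2 ** m - 1           # the largest m-bit value
    print(m, n, n // 2)      # about n/2 trial divisions in the second version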
Exponential versus linear is a question of how the input is represented and the machine model. If the input is represented in unary (e.g., 7 is sent as 1111111) and the machine can do constant time division on numbers, then yes, the algorithm is linear time. A binary representation of n, however, uses about lg n bits, and the quantity n has an exponential relationship to lg n (n = 2^(lg n)).
Given that the number of loop iterations is within a constant factor for both solutions, they are in the same big O class, Theta(n). This is exponential if the input has lg n bits, and linear if it has n.
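One quick way to see the "within a constant factor" claim (a sketch; the loop bounds are taken from the two versions in the question, the helper names are made up):

def iterations_v1(n):
    return len(range(2, n - 1))        # first version: about n iterations

def iterations_v2(n):
    return len(range(2, n // 2 + 1))   # second version: about n/2 iterations

for n in (101, 1009, 10007):           # a few primes, i.e. worst-case inputs
    print(n, iterations_v1(n), iterations_v2(n))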
I hope this will explain why they are in fact linear.
Suppose you call the function and count how many times each line is executed:
def Prime(n):                     # 1 time
    for i in range(2, n - 1):     # about n times (n - 3 iterations)
        if n % i == 0:            # 1 time per iteration
            return False          # at most 1 time
    return True                   # at most 1 time
# overall -> about n
def Prime(n):                       # 1 time
    for i in range(2, n // 2 + 1):  # about n/2 times (n/2 - 1 iterations)
        if n % i == 0:              # 1 time per iteration
            return False            # at most 1 time
    return True                     # at most 1 time
# overall -> about n/2 times -> still O(n)
This shows that Prime is a linear-time function (in the value n).
An O(n^2) cost might come from the code block where this function is called.
Consider this example of a binary search tree.
n = 10; and if base = 2, then
log n = log2(10) = 3.321928.
I am assuming it means that at most 3.321 steps (accesses) will be required to search for an element. I also assume that the BST is a balanced binary tree.
Now, to access the node with value 25, I have to go through the following nodes:
50
40
30
25
So I had to access 4 nodes, and 3.321 is nearly equal to 4.
Is this understanding right or erroneous?
I'd call your understanding not quite correct.
Big-O notation does not say anything about the exact number of steps done. The notation O(log n) means that something is approximately proportional to log n, but not necessarily equal to it.
If you say that the number of steps to search for a value in a BST is O(log n), this means that it is approximately C*log n for some constant C not depending on n, but it says nothing about the value of C. So for n=10 this never says that the number of steps is 4 or whatever. It can be 1, or it can be 1000000, depending on what C is.
What this notation does say is that if you consider two examples with different and big enough sizes, say n1 and n2, then the ratio of the number of steps in these two examples will be approximately log(n1)/log(n2).
So if for n=10 it took you, say, 4 steps, then for n=100 it should take approximately two times more, that is, 8 steps, because log(100)/log(10)=2, and for n=10000 it should take 16 steps.
And if for n=10 it took you 1000000 steps, then for n=100 it should take 2000000, and for n=10000 about 4000000.
This is all for "large enough" n; for small n the number of steps can deviate from this proportionality. For most practical algorithms the "large enough" usually starts from 5-10, if not from 1, but from a strict point of view the big-O notation does not set any requirement on where the proportionality should start.
Also, in fact the O(log n) notation does not require that the number of steps grows proportionally to log n, only that it grows no faster than proportionally to log n; that is, the ratio of the numbers of steps should not necessarily be log(n1)/log(n2), but <= log(n1)/log(n2).
Note also another situation that can make the background of the O-notation clearer. Consider not the number of steps, but the time spent searching in a BST. You clearly cannot predict this time, because it depends on the machine you are running on, on the particular implementation of the algorithm, and even on the units you use for time (seconds or nanoseconds, etc.). So the time can be 0.0001 or 100000 or whatever. However, all these effects (speed of your machine, etc.) roughly change all the measurement results by some constant factor. Therefore you can say that the time is O(log n), just with a different constant C in different cases.
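As a rough illustration of the ratio idea (an added sketch that counts worst-case comparisons of a plain iterative binary search; the exact numbers are not part of the answer):

def comparisons(n):
    # Count comparisons of an iterative binary search over n sorted items,
    # always descending into the right half (a worst case: key not present).
    lo, hi, count = 0, n - 1, 0
    while lo <= hi:
        count += 1
        mid = (lo + hi) // 2
        lo = mid + 1
    return count

for n in (10, 100, 10000):
    print(n, comparisons(n))   # roughly 4, 7, 14: the ratios track log(n1)/log(n2)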
Your thinking is not totally correct. The steps/accesses being considered are comparisons. But O(log n) is just a parameter to measure asymptotic complexity, not an exact step count. As answered precisely by Petr, you should go through the points mentioned in his answer.
Also, BST stands for binary search tree, also sometimes called an ordered or sorted binary tree.
The exact running time/number of comparisons can't be derived from an asymptotic complexity measurement. For that, you'll have to go back to the exact derivation for searching an element in a BST.
Assume that we have a "balanced" tree with n nodes. If the maximum number of comparisons to find an entry is (k+1), where k is the height, we have
2^(k+1) - 1 = n
from which we obtain
k = log2(n+1) - 1 = O(log2 n).
As you can see, the other constant factors are removed while measuring asymptotic complexity in worst-case analysis. So, the complexity of the comparisons reduces to O(log2 n).
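Plugging in the example from the question: for n = 10, k = log2(11) - 1 ≈ 2.46, so a balanced tree of 10 nodes has height about 3, and at most k + 1 ≈ 4 comparisons are needed, which matches the 4 nodes (50, 40, 30, 25) accessed above.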
Next, a demonstration of how an element is searched for in a BST, based on how comparisons are done:
1. Selecting 50, the root element // compare and move below to the left child or right child
2. Movement downwards from 50 to 40, the left child
3. Movement downwards from 40 to 30, the left child
4. Movement downwards from 30 to 25, found, hence no movement further.
// Had there been more elements to traverse deeper, it would have been counted the 5th step.
Hence, it found the item 25 after 3 downward traversals. So there are 4 comparisons and 3 downward traversals (because the height is 3).
Usually you say something like this:
Given a balanced binary search tree with n elements, you need O(log n) operations to search.
or
Search in a balanced binary search tree of n elements is in O(log n).
I like the second phrase more, because it emphasizes that O is a function returning a set of functions, given x (short: O(x)). x: ℕ → ℕ is a function. The input of x is the size of the input of a function, and the output of x can be interpreted as the number of operations you need.
A function g is in O(x) when g is lower than x multiplied by some non-negative constant, from some starting point n_0 on, for all following n.
In computer science, g is often set equal to an algorithm, which is wrong. It might be the number of operations of an algorithm, given the input size. Note that this is something different.
More formally: g ∈ O(x) if and only if there exist constants c > 0 and n_0 such that g(n) <= c * x(n) for all n >= n_0.
So, regarding your question: you have to define what n is (conceptually, not as a number). In your example, it could be the number of nodes or the number of nodes on the longest path to a leaf.
Usually, when you use big-O notation, you are not interested in the "average" case (and especially not in some given case), but you want to say something about the worst case.
I know that binary search has a time complexity of O(log n) for searching an element in a sorted array. But let's say that instead of selecting the middle element we select a random element; how would that impact the time complexity? Will it still be O(log n), or will it be something else?
For example:
A traditional binary search in an array of size 18 will go down like 18 -> 9 -> 4 ...
My modified binary search pings a random element and decides to remove the right part or left part based on the value.
My attempt:
Let C(N) be the average number of comparisons required by a search among N elements. For simplicity, we assume that the algorithm only terminates when there is a single element left (no early termination on strict equality with the key).
As the pivot value is chosen at random, the probabilities of the remaining sizes are uniform and we can write the recurrence
C(N) = 1 + (1/N) * Sum(1 <= i <= N : C(i))
Then
N*C(N) - (N-1)*C(N-1) = 1 + C(N)
and
C(N) - C(N-1) = 1 / (N-1)
The solution of this recurrence is the Harmonic series, hence the behavior is indeed logarithmic.
C(N) ~ ln(N-1) + gamma (Euler's constant)
Note that this is the natural logarithm, which is better (smaller) than the base-2 logarithm by a factor of 1.44!
My bet is that adding the early termination test would further improve the log basis (and keep the log behavior), but at the same time double the number of comparisons, so that globally it would be worse in terms of comparisons.
Let us assume we have an array of size 18. The number I am looking for is in the 1st spot. In the worst case, I always randomly pick the highest number (18 -> 17 -> 16 ...), effectively eliminating only one element in every iteration. So it becomes a linear search: O(n) time.
The recursion in the answer of @Yves Daoust relies on the assumption that the target element is located either at the beginning or the end of the array. In general, the position of the element within the array changes after each recursive call, which makes it difficult to write and solve the recursion. Here is another solution that proves an O(log n) bound on the expected number of recursive calls.
Let T be the (random) number of elements checked by the randomized version of binary search. We can write T=sum I{element i is checked} where we sum over i from 1 to n and I{element i is checked} is an indicator variable. Our goal is to asymptotically bound E[T]=sum Pr{element i is checked}. For the algorithm to check element i it must be the case that this element is selected uniformly at random from the array of size at least |j-i|+1 where j is the index of the element that we are searching for. This is because arrays of smaller size simply won't contain the element under index i while the element under index j is always contained in the array during each recursive call. Thus, the probability that the algorithm checks the element at index i is at most 1/(|j-i|+1). In fact, with a bit more effort one can show that this probability is exactly equal to 1/(|j-i|+1). Thus, we have
E[T] = sum_i Pr{element i is checked} <= sum_i 1/(|j-i|+1) = O(log n),
where the last equality follows from the summation of the harmonic series.
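A small simulation can make both answers concrete (an added sketch; the function random_search and the parameters n and trials are made up for illustration):

import math
import random

def random_search(arr, key):
    # Binary search that picks a random pivot instead of the middle element,
    # returning how many elements it checked before finding the key.
    lo, hi, checked = 0, len(arr) - 1, 0
    while lo <= hi:
        checked += 1
        mid = random.randint(lo, hi)
        if arr[mid] == key:
            return checked
        if arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return checked

n = 100000
arr = list(range(n))
trials = 2000
avg = sum(random_search(arr, random.randrange(n)) for _ in range(trials)) / trials
print(avg, math.log(n), math.log2(n))

On a typical run the average comes out somewhat larger than log2(n) but still grows logarithmically, in line with the O(log n) bound above.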
Generic form: T(n) = aT(n/b) + f(n)
So I must compare n^(log_b a) with f(n).
If n^(log_b a) > f(n), it is case 1 and T(n) = Θ(n^(log_b a)).
If n^(log_b a) < f(n), it is case 2 and T(n) = Θ((n^(log_b a)) * (log_b a)).
Is that correct? Or did I misunderstand something?
And what about case 3? When does it apply?
Master Theorem for Solving Recurrences
Recurrences occur in a divide and conquer strategy of solving complex problems.
What does it solve?
It solves recurrences of the form T(n) = aT(n/b) + f(n).
a should be greater than or equal to 1. This means that the problem is reduced to at least one smaller subproblem; at least one recursion is needed.
b should be greater than 1, which means that at every recursion the size of the problem is reduced. If b were not greater than 1, our subproblems would not be of smaller size.
f(n) must be positive for sufficiently large values of n.
Picture the recursion tree of this recurrence.
Let's say we have a problem of size n to be solved. At each step, the problem can be divided into a subproblems, each of a smaller size, where the size is reduced by a factor of b.
In simple words, this means that a problem of size n can be divided into a subproblems of relatively smaller size n/b.
Also, the recursion tree shows that at the end, when we have divided the problem multiple times, each subproblem becomes so small that it can be solved in constant time.
For the below derivation consider log to the base b.
Let us say that H is the height of the tree; then H = log n, and the number of leaves = a^(log n) = n^(log a).
Total work done at Level 1 : f(n)
Total work done at Level 2 : a * f(n/b)
Total work done at Level 3 : a * a * f(n/b^2)
Total work done at the last level : number of leaves * Θ(1). This is equal to n^(log a).
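Putting the levels together (still with log taken to the base b), the total is T(n) = Θ(n^(log a)) + sum over i from 0 to (log n) - 1 of a^i * f(n/b^i); the three cases below are about which part of this sum dominates.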
The three cases of the Master Theorem
Case 1:
Now let us assume that the cost of the operation is increasing by a significant factor at each level, and that by the time we reach the leaf level the value of f(n) has become polynomially smaller than the value n^(log a). Then the overall running time will be heavily dominated by the cost of the last level. Hence T(n) = Θ(n^(log a)).
Case 2:
Let us assume that the cost of the operation on each level is roughly equal. In that case f(n) is roughly equal to n^(log a). Hence, the total running time is f(n) times the total number of levels.
T(n) = Θ(n^(log a) * (log n)^(k+1)) where k >= 0; for the plain statement of case 2 above, k = 0 and this is Θ(n^(log a) * log n), with log n being the height of the tree.
Note: here k+1 is the exponent of log n, coming from the more general form of case 2 where f(n) = Θ(n^(log a) * (log n)^k).
Case 3:
Let us assume that the cost of the operation on each level is decreasing by a significant factor at each level, and that by the time we reach the leaf level the value of f(n) has become polynomially larger than the value n^(log a). Then the overall running time will be heavily dominated by the cost of the first level. Hence T(n) = Θ(f(n)).
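A few standard examples (not from the original answer): T(n) = 4T(n/2) + n has n^(log_2 4) = n^2 polynomially larger than f(n) = n, so case 1 gives Θ(n^2); merge sort's T(n) = 2T(n/2) + Θ(n) has f(n) = Θ(n^(log_2 2)) = Θ(n), so case 2 gives Θ(n log n); and T(n) = 2T(n/2) + n^2 has f(n) polynomially larger than n^(log_2 2) = n, so case 3 gives Θ(n^2).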
If you are interested in more detailed reading and examples to practice, visit my blog entry Master Method to Solve Recurrences
I think you have misunderstood it.
If n^(log_b a) > f(n), it is case 1 and T(n) = Θ(n^(log_b a)).
Here you should not be worried about f(n); as a result, what you are getting is T(n) = Θ(n^(log_b a)).
f(n) is part of T(n), and if you get the result T(n), then that value is already inclusive of f(n). So there is no need to consider that part separately.
Let me know if anything is unclear.