Proving Queue complexity by induction - data-structures

I have two stacks that implement a Queue, and the total number of calls to the Enqueue and Dequeue functions is n (#enqueue + #dequeue = n).
And I need to prove by induction that the total complexity of those n function calls is Θ(n).
I know that I have to prove both Ω(n) and O(n). For Ω(n) I thought maybe I'd get it by calling enqueue n times (each call takes at least constant time).
To be honest I'm new to this, and I don't have any idea how to start proving this by induction. Can anyone give me a push, please?
Thanks
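For reference, here is a minimal sketch of the usual two-stack queue that this kind of exercise is about (Python; the class and field names are just my own placeholders):

class TwoStackQueue:
    def __init__(self):
        self.inbox = []    # enqueue pushes here
        self.outbox = []   # dequeue pops from here

    def enqueue(self, x):
        # A single push onto the in-stack: O(1).
        self.inbox.append(x)

    def dequeue(self):
        # Only when the out-stack is empty do we move everything over,
        # which is why a single dequeue can cost O(n) in the worst case.
        if not self.outbox:
            while self.inbox:
                self.outbox.append(self.inbox.pop())
        return self.outbox.pop()

q = TwoStackQueue()
q.enqueue(1); q.enqueue(2)
print(q.dequeue())  # prints 1: FIFO order, even though both containers are stacks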

I'm showing you the proof, but I'm not sure that it is 100% correct.
#enqueue + #dequeue = n
T(enqueue) = O(1), since pushing onto a stack takes only one step.
T(dequeue) = O(n) in the worst case (when every element has to be moved from one stack to the other).
T(enqueue + dequeue) >= max(T(enqueue), T(dequeue))
T(n) >= max(O(1), O(n))
T(n) = O(n), to prove
By induction:
1. For n = 1:
T(enqueue) = O(1) and T(dequeue) = O(1)
Therefore, T(1) = O(1).
2. Assume T(n) = O(n) holds.
Then we need to prove T(n+1) = O(n).
(By the rule: T(a+b) = T(a) + T(b))
So,
T(n+1) = T(n) + T(1)
= O(n) + O(1) (from the explanation above)
= max(O(n), O(1))
= O(n)
Hence proved: T(n+1) = O(n), and so T(n) = O(n).
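A different way to see the Θ(n) bound, if the induction above feels shaky because a single dequeue is not O(1): count the stack operations directly (aggregate counting rather than per-call induction). Every element that is ever enqueued is pushed at most twice (once onto each stack) and popped at most twice, so n calls perform at most 4n stack operations plus O(1) overhead per call, which is O(n) in total; and trivially n calls take at least n steps, which is Ω(n). Together that gives Θ(n).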

Related

Run-time of these recurrence relations

How do you calculate a tight bound on the running time for these recurrences?
T(n)=T(n-3)+n^2
T(n) = 4T(n/4)+log^3(n)
For the first one I used the substitution method, which gave me n^2, but that wasn't right, and for the second one I used the Master Theorem and got n*log^4(n), which also wasn't right. A thorough explanation would be helpful. Thanks!
For the first recurrence, we can solve it by the recursion-tree method:
T(n) = T(n-3) + n^2
a) Here we see that the number of subproblems is about n/3 (each step subtracts 3 from n, so after n/3 steps we reach the last subproblem).
b) The cost at each level is at most n^2 (at level i it is actually (n - 3i)^2).
Therefore the total time is roughly (n/3)*n^2 = (n^3)/3, which is O(n^3), and in fact Θ(n^3); see the sum worked out below.
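A quick sketch of the sum that the tree describes, bounded on both sides:
T(n) = sum over i from 0 to n/3 of (n - 3i)^2
     <= (n/3 + 1) * n^2 = O(n^3)
and, keeping only the first n/6 terms, each of which is at least (n/2)^2,
T(n) >= (n/6) * (n^2/4) = n^3/24 = Omega(n^3),
so T(n) = Theta(n^3).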
Coming to the second recurrence relation
T(n)=4T(n/4)+log^3(n)
Here the corollary of the Master theorem for logarithmic factors does not apply, because f(n) = log^3(n) is not of the form n*log^k(n); we could have used that corollary if the additive term were something like n*log^3(n), which is larger than n by exactly a polylog factor.
However, the plain case 1 of the Master theorem does apply here, as sketched below, and it gives T(n) = Theta(n).
Correct me if I am wrong here.
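The case 1 check, spelled out under the standard statement of the theorem:
a = 4, b = 4, so n^(log_b a) = n^(log_4 4) = n.
f(n) = log^3(n) = O(n^(1/2)), i.e. f(n) = O(n^(1 - e)) with e = 1/2,
so case 1 applies and T(n) = Theta(n^(log_b a)) = Theta(n).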

Divide and Conquer alg that runs in constant time?

Can you point me to an example of a divide-and-conquer algorithm that runs in CONSTANT time? I'm in an "OMG! I can not think of any such thing" kind of situation. Point me to something please. Thanks
I know that an algorithm that follows the recurrence T(n) = 2T(n/2) + n would be merge sort. We're dividing the problem into 2 subproblems, each of size n/2. Then we're taking n time to conquer everything back into one sorted array.
I also know that T(n) = T(n/2) + 1 would be binary search.
But what is T(n) = 1?
For a divide-and-conquer algorithm to run in constant time, it needs to do no more than a fixed amount of work on any input. Therefore, it can make at most a fixed number of recursive calls on any input, since if the number of calls was unbounded, the total work done would not be a constant. Moreover, it needs to do a constant amount of work across all those recursive calls.
This eliminates basically any reasonable-looking recurrence relation. Anything of the form
T(n) = aT(n / b) + O(n^k)
is immediately out of the question, because the number of recursive calls would grow as a function of the input n.
You could make some highly contrived divide-and-conquer algorithms that run in constant time. For example, consider this problem:
Return the first element of the input array.
This could technically be solved with divide-and-conquer by noting that
The first element of a one-element array is equal to itself.
The first element of an n-element array is the first element of the subarray of just the first element.
The recurrence is then
T(n) = T(1) + O(1)
T(1) = 1
As you can see, this is a very odd-looking recurrence, but it does work out.
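For concreteness, here is what that contrived algorithm might look like (Python; first_element is just a name I made up for this sketch):

def first_element(arr):
    # Base case: the first element of a one-element array is itself.
    if len(arr) == 1:
        return arr[0]
    # "Divide" step: recurse on the subarray containing just the first element.
    # There is exactly one recursive call, on a constant-size subproblem,
    # so the total work is O(1) no matter how long arr is.
    return first_element(arr[:1])

print(first_element([7, 3, 9]))  # prints 7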
I've never heard of anything like this coming up in practice, but if I think of anything I'll try to update this answer with details. (A note: I'm not expecting to ever update this answer. ^_^)
Hope this helps!

How to solve the recursive complexity T(n) = T(n/3)+T(2n/3)+cn

When computing the median (the selection-by-medians algorithm), we know that if we break the input array into groups of five and solve recursively, we get O(n) complexity, but if we break the array into groups of three, we do not get O(n).
Does anyone know how to prove it?
It's going to be Θ(n lg n).
Try drawing its recursion tree: the total cost at each level is at most cn, and the depth of the tree is Θ(lg n).
Note: actually its depth is between log(n) base 3 and log(n) base 3/2, but since logarithms in all bases have the same order, we can just treat it as lg(n).
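A sketch of that recursion-tree bound, with c the constant from the cn term:
Every level costs at most cn, because the subproblem sizes on any level add up to at most n.
The tree is complete down to depth log(n) base 3, so T(n) >= cn * log_3(n) = Omega(n lg n);
no root-to-leaf path is longer than log(n) base 3/2, so T(n) <= cn * log_{3/2}(n) = O(n lg n);
hence T(n) = Theta(n lg n).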
It looks like the recurrence in the title is wrong, but I think the Master Theorem for solving recurrences will be handy. You can show that going from one denominator to another puts you into a different case which goes from O(n) to something worse.

What's the complexity of for i: for o = i+1

for i = 0 to size(arr)
for o = i + 1 to size(arr)
do stuff here
What's the worst-case time complexity of this? It's not N^2, because the inner loop shrinks by one on every iteration of the outer loop. It's not N either; it should be bigger: (N-1) + (N-2) + (N-3) + ... + 1.
It is N ^ 2, since it's the product of two linear complexities.
(There's a reason asymptotic complexity is called asymptotic and not identical...)
See Wikipedia's explanation on the simplifications made.
Think of it as if you were working with an n x n matrix. You are working on approximately half of the elements in the matrix, but O(n^2/2) is the same as O(n^2).
When you want to determine the complexity class of an algorithm, all you need is to find the fastest growing term in the complexity function of the algorithm. For example, if you have complexity function f(n)=n^2-10000*n+400, to find O(f(n)), you just have to find the "strongest" term in the function. Why? Because for n big enough, only that term dictates the behavior of the entire function. Having said that, it is easy to see that both f1(n)=n^2-n-4 and f2(n)=n^2 are in O(n^2). However, they, for the same input size n, don't run for the same amount of time.
In your algorithm, if n=size(arr), the do stuff here code will run f(n)=n+(n-1)+(n-2)+...+2+1 times. It is easy to see that f(n) represents a sum of an arithmetic series, which means f(n)=n*(n+1)/2, i.e. f(n)=0.5*n^2+0.5*n. If we assume that do stuff here is O(1), then your algorithm has O(n^2) complexity.
for i = 0 to size(arr)
I assumed that the loop ends when i becomes greater than size(arr), not equal to it. However, if the latter is the case, then f(n) = 0.5*n^2 - 0.5*n, and it is still in O(n^2). Remember that O(1), O(n), O(n^2), ... are complexity classes, and that the complexity function of an algorithm describes, for input size n, how many steps the algorithm performs.
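A throwaway script to sanity-check the two formulas (assuming the upper bounds are exclusive, as in Python's range):

n = 10                              # any array size
count = 0
for i in range(n):                  # for i = 0 to size(arr)
    for o in range(i + 1, n):       # for o = i + 1 to size(arr)
        count += 1                  # stands in for "do stuff here"
print(count, n * (n - 1) // 2)      # both print 45 for n = 10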
It's n*(n-1)/2, which is O(n^2).

What's wrong with this inductive proof that mergesort is O(n)?

Comparison-based sorting is Ω(n log(n)), so we know that mergesort can't be O(n). Nevertheless, I can't find the problem with the following proof:
Proposition P(n): For a list of length n, mergesort takes O(n) time.
P(0): merge sort on the empty list just returns the empty list.
Strong induction: Assume P(1), ..., P(n-1) and try to prove P(n). We know that at each step in a recursive mergesort, two approximately "half-lists" are mergesorted and then "zipped up". The mergesorting of each half list takes, by induction, O(n/2) time. The zipping up takes O(n) time. So the algorithm has a recurrence relation of M(n) = 2M(n/2) + O(n) which is 2O(n/2) + O(n) which is O(n).
Compare the "proof" that linear search is O(1).
Linear search on an empty array is O(1).
Linear search on a nonempty array compares the first element (O(1)) and then searches the rest of the array (O(1)). O(1) + O(1) = O(1).
The problem here is that, for the induction to work, there must be one big-O constant that works both for the hypothesis and the conclusion. That's impossible here and impossible for your proof.
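Spelled out with constants: the linear-search "proof" would need a single c with T(n) <= c for every n, but the step it actually justifies is T(n) <= T(n-1) + c', which unrolls to T(n) <= c'*n. No fixed c dominates c'*n for all n, so the hidden constant quietly grows with n, which is exactly what O(1) forbids.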
The "proof" only covers a single pass, it doesn't cover the log n number of passes.
The recurrence only shows the cost of a pass as compared to the cost of the previous pass. To be correct, the recurrence relation should have the cumulative cost rather than the incremental cost.
You can see where the proof falls down by viewing the sample merge sort at http://en.wikipedia.org/wiki/Merge_sort
Here is the crux: all induction steps which refer to particular values of n must refer to a particular function T(n), not to O() notation!
O(M(n)) notation is a statement about the behavior of the whole function from problem size to performance guarantee (asymptotically, as n increases without limit). The goal of your induction is to determine a performance bound T(n), which can then be simplified (by dropping constant and lower-order factors) to O(M(n)).
In particular, one problem with your proof is that you can't get from your statement purely about O() back to a statement about T(n) for a given n. O() notation allows you to ignore a constant factor for an entire function; it doesn't allow you to ignore a constant factor over and over again while constructing the same function recursively...
You can still use O() notation to simplify your proof, by demonstrating:
T(n) = F(n) + O(something less significant than F(n))
and propagating this predicate in the usual inductive way. But you need to preserve the constant factor of F(): this constant factor has direct bearing on the solution of your divide-and-conquer recurrence!
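For comparison, here is how the constant is preserved in the correct bound for mergesort (a sketch, assuming the merge/zip step costs at most c*n and ignoring floors):
Claim: T(n) <= c*n*lg(n) for all n >= 2, for one fixed c.
Step: T(n) = 2*T(n/2) + c*n
          <= 2*c*(n/2)*lg(n/2) + c*n
          = c*n*(lg(n) - 1) + c*n
          = c*n*lg(n).
The same constant c appears in the hypothesis and the conclusion, which is exactly what the O(n) "proof" cannot arrange.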
