In a program, I'm using two data structures:
1: An array of pointers of size k, where each pointer points to a linked list (hence k lists in total). The total number of nodes across all the lists is M (something like hashing with separate chaining; k is fixed, M can vary).
2: Another array of integers of size M (where M is the number of nodes above).
The question is: what is the overall space complexity of the program? Is it something like below?
First part: O(k+M) or just O(M)? Both are correct, I guess!
Second part: O(2M) or just O(M)? Again, both correct?
Overall: O(k+M) + O(2M) ==> O(max(k+M, 2M))?
Or just O(M)?
Please help.
O(k+M) is O(M) whenever M dominates k, which holds here since k is a fixed constant. So the final result is O(M).
First part: O(k+M) is not the right way to write it; since k is fixed, it is just O(M).
Second part: O(2M) is not the right way to write it either, because we drop constant factors in big-O notation, so the correct form is O(M).
Overall: O(M) + O(M) ==> O(M).
Both are correct in the two cases. But since O(k+M) = O(M) when k is constant, everybody will use the simplest notation, which is O(M).
For the second part, a single array of size M is O(M).
For the overall, it would be O(k+M+M) = O(max(k+M, 2M)) = O(M) (we can "forget" multiplicative and additive constants in big-O notation, except when the whole bound is a constant, i.e. constant time).
As a reminder, g(x) = O(f(x)) iff there exist x0 and c > 0 such that x > x0 implies g(x) <= c*f(x).
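To make the accounting concrete, here is a minimal C sketch of the two structures (the names Tables and make_tables are illustrative, not from the question): k head pointers plus one integer per node gives O(k) + O(M) = O(k + M) space, which is O(M) once k is a fixed constant.

#include <stdlib.h>

struct Node { int key; struct Node *next; };   /* one node per element: M nodes total, O(M) */

struct Tables {
    struct Node **buckets;   /* array of k head pointers: O(k) */
    int *aux;                /* array of M integers, one per node: O(M) */
};

struct Tables make_tables(size_t k, size_t M)
{
    struct Tables t;
    t.buckets = calloc(k, sizeof *t.buckets);   /* k pointers */
    t.aux = malloc(M * sizeof *t.aux);          /* M integers */
    return t;                                   /* total space: O(k + M) */
}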
The problem is this:
given an array A of size n and an algorithm B with B(A,n) = b, where b is an element of A such that
|{1 <= i <= n | a_i > b}| >= n/10 and
|{1 <= i <= n | a_i < b}| >= n/10.
The time complexity of B is O(n).
I need to find the median in O(n).
I tried solving this by applying B and then collecting the group of elements smaller than b; let's name this group C.
The elements bigger than b form a group D.
We can build C and D by traversing array A in O(n).
Now I can apply algorithm B to the group that must contain the median (the other one can be discarded), and by repeating the same principle I eventually get the median element. But that gives time complexity O(n log n).
I can't seem to find a solution that works in O(n).
This is a homework question and I would appreciate any help or insight.
You are supposed to use function B() to choose a pivot element for the Quickselect algorithm: https://en.wikipedia.org/wiki/Quickselect
It looks like you are already thinking of exactly this procedure, so you already have the algorithm, and you're just calculating the complexity incorrectly.
In each iteration, you run a linear-time procedure on a list that is at most 9/10ths the size of the list in the previous iteration, so the worst-case complexity is
O( n + n*0.9 + n*0.9^2 + n*0.9^3 ...)
Geometric progressions like this converge to a constant multiplier:
Let T = 1 + 0.9^1 + 0.9^2 + ...
It's easy to see that
T - T*0.9 = 1, so
T*(0.1) = 1, and T=10
So the total number of elements processed through all iterations is less than 10n, and your algorithm therefore takes O(n) time.
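Here is a minimal C sketch of that procedure, assuming B is exposed as a function (the name choose_pivot and the 0-based rank k are my own conventions, not from the question; choose_pivot is left as a prototype, since B is the given black box):

#include <stddef.h>

/* The given algorithm B: returns a pivot b from a with at least n/10
   elements of a strictly above it and n/10 strictly below it, in O(n). */
int choose_pivot(const int *a, size_t n);

/* Return the k-th smallest element of a (k is 0-based); overwrites a. */
int select_kth(int *a, size_t n, size_t k)
{
    while (n > 1) {
        int b = choose_pivot(a, n);
        size_t lo = 0, eq = 0;
        for (size_t i = 0; i < n; i++) {     /* count elements < b and == b */
            if (a[i] < b) lo++;
            else if (a[i] == b) eq++;
        }
        if (k < lo) {                        /* answer lies among elements < b */
            size_t j = 0;
            for (size_t i = 0; i < n; i++) if (a[i] < b) a[j++] = a[i];
            n = lo;
        } else if (k < lo + eq) {            /* answer is b itself */
            return b;
        } else {                             /* answer lies among elements > b */
            size_t j = 0;
            for (size_t i = 0; i < n; i++) if (a[i] > b) a[j++] = a[i];
            k -= lo + eq;
            n -= lo + eq;
        }
    }
    return a[0];
}

By B's guarantee, each pass keeps at most 9n/10 elements, which is exactly what makes the geometric sum above converge to O(n).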
How do I prove that this algorithm is O(log log n)?
i <-- 2
while i < n
    i <-- i*i
Well, I believe we should first start with n / 2^k < 1, but that will yield O(log n). Any ideas?
I want to look at this in a simple way: what happens after one iteration, after two iterations, and after k iterations? I think this way I'll be able to understand better how to compute this correctly. What do you think about this approach? I'm new to this, so excuse me.
Let us use the name A for the presented algorithm, and let us further assume that the input variable is n.
Then, strictly speaking, A is not in the runtime complexity class O(log log n). A must be in Omega(n), i.e., in terms of runtime complexity it is at least linear. Why? There is i*i, a multiplication on operands whose size depends on i, which in turn depends on n. A naive multiplication approach might require quadratic runtime; more sophisticated approaches reduce the exponent, but not below linear in terms of n.
For the sake of completeness, the comparison < is also a linear-time operation.
For the purpose of the question, we can assume that multiplication and comparison are done in constant time. Then we can formulate the question as: how often do we have to apply the constant-time operations < and * until A terminates for a given n?
Simply speaking, the squaring reduces the remaining work logarithmically, and its repeated application yields a second logarithmic reduction. How can we show this? Thanks to the simple structure of A, we can transform A into an equation that we can solve directly.
A repeatedly squares i, so after k iterations it has computed 2^(2^k). When is 2^(2^k) = n? To solve this for k, we apply the logarithm (base 2) twice and, ignoring the bases, get k = log log n. The < can be ignored thanks to the O notation.
To answer the very last part of the original question, we can also look at examples for each iteration. We can note the state of i at the end of the while loop body for each iteration of the while loop:
1: i = 4 = 2^2 = 2^(2^1)
2: i = 16 = 4*4 = (2^2)*(2^2) = 2^(2^2)
3: i = 256 = 16*16 = 4*4*4*4 = (2^2)*(2^2)*(2^2)*(2^2) = 2^(2^3)
4: i = 65536 = 256*256 = 16*16*16*16 = ... = 2^(2^4)
...
k: i = ... = 2^(2^k)
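A tiny C check of this pattern (the sample values of n are arbitrary): counting the passes of the loop shows they grow like log2(log2 n).

#include <stdio.h>

int main(void)
{
    /* i starts at 2 and is squared each pass, so after k passes i = 2^(2^k);
       the loop stops once i >= n, i.e. after about log2(log2 n) passes. */
    unsigned long long ns[] = {16ULL, 256ULL, 65536ULL, 1ULL << 32};
    for (int t = 0; t < 4; t++) {
        unsigned long long n = ns[t], i = 2;
        int passes = 0;
        while (i < n) { i *= i; passes++; }
        printf("n = %llu -> %d passes\n", n, passes);   /* prints 2, 3, 4, 5 */
    }
    return 0;
}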
Is the time complexity of the following code O(NV^2)?
for i from 1 to N:
    for j from 1 to V:
        for k from 1 to A[i]:   // max(A) = V
            z = z + k
Yeah, whenever we talk about O-notation, we always think about the upper bound (or the worst case).
So, the complexity for this code will be
O(N * V * maximum_value_of_A)
= O(N * V * V)   // since the maximum value of A is V, the third loop iterates at most from 1 to V, i.e. V times
= O(N * V^2).
For sure it is O(NV^2), since that only claims the code is never asymptotically slower than NV^2. Because max(A) = V, the worst case is when every entry of A equals V; if so, the complexity is bounded by O(N*V*V).
Very roughly, the inner k loop runs avg(A) times on average, which allows us to say that the whole function is Omega(N*V*avg(A)), where avg(A) <= V.
A Theta bound (i.e., a tight asymptotic bound) could be stated as Theta(N*V*O(V)), with O(V) representing a factor that never grows faster than V but is not constant.
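A small C sketch with made-up values for A, counting the exact number of innermost iterations, V * sum(A), against the bound N*V*V:

#include <stdio.h>

int main(void)
{
    int A[] = {3, 5, 2, 5, 1};               /* sample data: N = 5, max(A) = V = 5 */
    int N = 5, V = 5;
    long long z = 0, count = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < V; j++)
            for (int k = 1; k <= A[i]; k++) { z += k; count++; }
    /* count = V * sum(A) = 80 here, never more than N*V*V = 125 */
    printf("iterations = %lld, bound N*V*V = %d\n", count, N * V * V);
    return 0;
}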
Here is an algorithm for finding the kth smallest number in an n-element array, using the partition algorithm of Quicksort.
int small(int a[], int i, int j, int k)
{
    if (i == j) return a[i];                    /* one element left */
    int m = partition(a, i, j);                 /* pivot ends up at its final index m */
    if (m == k) return a[m];
    if (m > k) return small(a, i, m - 1, k);    /* kth smallest lies left of m */
    else return small(a, m + 1, j, k);          /* kth smallest lies right of m */
}
Where i, j are the starting and ending indices of the array (j - i + 1 = n, the number of elements) and k is the index of the kth smallest number to be found.
I want to know the best case and the average case of the above algorithm, and briefly how to derive them. I know we should not count the termination condition in the best case, and also that the partition algorithm takes O(n). I do not want asymptotic notation but an exact mathematical result, if possible.
First of all, I'm assuming the array is sorted - something you didn't mention - because that code wouldn't otherwise work. And, well, this looks to me like a regular binary search.
Anyway...
The best case scenario is when either the array is one element long (you return immediately because i == j), or, for large values of n, if the middle position, m, is the same as k; in that case, no recursive calls are made and it returns immediately as well. That makes it O(1) in best case.
For the general case, consider that T(n) denotes the time taken to solve a problem of size n using your algorithm. We know that:
T(1) = c
T(n) = T(n/2) + c
Where c is a constant time operation (for example, the time to compare if i is the same as j, etc.). The general idea is that to solve a problem of size n, we consume some constant time c (to decide if m == k, if m > k, to calculate m, etc.), and then we consume the time taken to solve a problem of half the size.
Expanding the recurrence can help you derive a general formula, although it is pretty intuitive that this is O(log(n)):
T(n) = T(n/2) + c = T(n/4) + c + c = T(n/8) + c + c + c = ... = T(1) + c*log(n) = c*(log(n) + 1)
That should be the exact mathematical result. The algorithm runs in O(log(n)) time. An average-case analysis is harder because you need to know the conditions in which the algorithm will be used. What is the typical size of the array? The typical value of k? What is the most likely position for k in the array? If it's in the middle, for example, the average case may be O(1). It really depends on how you use this.
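A quick C check of that closed form (taking the constant c = 1, an arbitrary choice): applying T(n) = T(n/2) + 1 down to T(1) = 1 for powers of two reproduces log2(n) + 1.

#include <stdio.h>

int main(void)
{
    int expected = 1;                      /* log2(n) + 1, maintained incrementally */
    for (int n = 1; n <= 1024; n *= 2) {
        int t = 1, m = n;                  /* t starts at T(1) = c = 1 */
        while (m > 1) { m /= 2; t++; }     /* apply T(m) = T(m/2) + 1 */
        printf("n = %4d  T(n) = %2d  log2(n)+1 = %2d\n", n, t, expected++);
    }
    return 0;
}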
Question 01:
How can I find T(1) when I measure the complexity of an algorithm?
For example
I have this algorithm
int Max1(int *X, int N)
{
    int a;
    if (N == 1) return X[0];     /* base case: a single element is the max */
    a = Max1(X, N - 1);          /* max of the first N-1 elements */
    if (a > X[N-1]) return a;
    else return X[N-1];
}
How can I find T(1)?
Question 2:
T(n) = T(n-1) + 1 ==> O(n)
What is the meaning of the "1" in this equation?
Cordially
Max1(X, N-1) is the actual recursive work; the rest is a few checks, which are O(1),
as the time they take is the same regardless of the input.
The Max1 function, I can only assume, finds the highest number in the array; this is O(n), as its running time increases linearly with the number of inputs n.
Also, as far as I can tell, the 1 simply stands for 1; in most algorithms only letters have variable meanings. If you mean how they got from
T(n-1) + 1 to O(n), it is due to the fact that you ignore coefficients and lower-order terms, so the 1 in both cases is dropped, making it O(n).
Answer 1. You are looking for a complexity. You must decide what case complexity you want: best, worst, or average. Depending on what you pick, you find T(1) in different ways:
Best: Think of the easiest input of length 1 that your algorithm could get. If you're searching for an element in a list, the best case is that the element is the first thing in the list, and you can have T(1) = 1.
Worst: Think of the hardest input of length 1 that your algorithm could get. Maybe your linear search algorithm executes 1 instruction for most inputs of length 1, but for the list [77], you take 100 steps (this example is a bit contrived, but it's entirely possible for an algorithm to take more or less steps depending on properties of the input unrelated to the input's "size"). In this case, your T(1) = 100.
Average: Think of all the inputs of length 1 that your algorithm could get. Assign probabilities to these inputs. Then, calculate the average T(1) of all possibilities to get the average-case T(1).
In your case, for inputs of length 1, you always return immediately, so your T(1) = O(1) (the actual number depends on how you count instructions).
Answer 2. The "1" in this context indicates a precise number of instructions, in some system of instruction counting. It is distinguished from O(1) in that O(1) could mean any number (or numbers) that do not depend on (change according to, trend with, etc.) the input. Your equation says "The time it takes to evaluate the function on an input of size n is equal to the time it takes to evaluate the function on an input of size n - 1, plus exactly one additional instruction".
T(n) is what's called a "function of n," which is to say, n is a "variable" (meaning that you can substitute in different values in its place), and each particular (valid) value of n will determine the corresponding value of T(n). Thus T(1) simply means "the value of T(n) when n is 1."
So the question is, what is the running-time of the algorithm for an input value of 1?
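One concrete way to see it is to instrument the function with a step counter (counting one step per call is my own convention here, matching the "1" in the recurrence):

#include <stdio.h>

static int steps = 0;

int Max1(int *X, int N)
{
    steps++;                           /* one step per call: T(1) = 1 */
    if (N == 1) return X[0];
    int a = Max1(X, N - 1);            /* T(N) = T(N-1) + 1 */
    return (a > X[N-1]) ? a : X[N-1];
}

int main(void)
{
    int X[] = {3, 7, 2, 9, 4};
    printf("max = %d, steps = %d\n", Max1(X, 5), steps);   /* max = 9, steps = 5 */
    return 0;
}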