Given the following code, what is the complexity of 3. and how would I represent simple algorithms with the following complexities?
O(n²+n)
O(n²+2n)
O(logn)
O(nlogn)
var c1 = new[] {1,2,3};
var c2 = new[] {1,2,3};

//1.
//O(n)
foreach(var i in c1)
{
}

//2.
//O(n²)
foreach(var i in c1)
{
    foreach(var j in c1)
    {
    }
}

//3.
//O(n^?) — not sure how to express this one
foreach(var i in c1)
{
    foreach(var j in c2)
    {
    }
}
3 is O(n*m), or O(n^2) if the two collections are the same size.
Writing O(n^2+n) is pointless because the n term is dominated by n^2. Just write O(n^2).
Most decent comparison sort algorithms run at O(n*log(n)). If you don't know any, look on Wikipedia.
A binary search is O(log(n)).
The outer foreach is executed n = |c1| times (where |x| denotes the size of x), while the inner foreach is executed m = |c2| times. That's O(n * m) iterations in total.
how would I represent simple algorithms with the following complexities?
O(n²+n)
This is the same as O(n^2). Something that takes O(n^2) time would be drinking a toast with every other person at a party, assuming that there's always exactly two people in a toast, and only one person does the toasting at a time.
O(n²+2n)
Same as above; the O(n^2) term dominates. Another example of an O(n^2) effort is planting trees in a square garden of length n, assuming it takes constant time to plant each tree, and that once you plant a tree other trees are excluded from its vicinity.
O(logn)
An example of this would be finding a word in a dictionary by repeatedly picking the midpoint of the region of pages you need to search next. (In other words, a binary search.)
O(nlogn)
Use the above algorithm, but now you have to look up every word in the dictionary, one binary search per word.
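A minimal C# sketch of the dictionary idea (the names here are mine and purely illustrative): a binary search over a sorted array, halving the search region each step, so the loop runs O(log n) times.

// Returns the index of 'target' in the sorted array, or -1 if it is absent.
static int BinarySearch(int[] sorted, int target)
{
    int lo = 0, hi = sorted.Length - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;   // midpoint of the remaining region
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}

Doing one such O(log n) lookup for each of n items is what gives the O(n log n) total mentioned above.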
There is no point writing O(n²+n) or O(n² + 2n). Leaving aside most of the mathematical foundations of algorithmic complexity, you at least need to know that it is "asymptotic": as n approaches infinity, the value of n² + n is dominated by the n² term, so that is the asymptotic complexity of n² + n.
3's complexity is O(I * J), where I and J are the size of the inputs in c1 and c2.
Truth be told, O(n²+n) and O(n²+2n) are the same.
Complexity of 3 is O(m*n).
There is no complexity O(n²+n) or O(n²+2n). It is just O(n²). This is because n is o(n²).
Example of O(log(n)) is binary search.
Example of O(n*log(n)) is merge sort.
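For reference, a bare-bones merge sort sketch in C# (illustrative only): the array is halved O(log n) times, and each level does O(n) merging work, which is where O(n log n) comes from.

// Top-down merge sort; returns a new sorted array.
static int[] MergeSort(int[] a)
{
    if (a.Length <= 1) return a;
    int mid = a.Length / 2;
    int[] left = MergeSort(a[..mid]);    // array range syntax needs C# 8 or later
    int[] right = MergeSort(a[mid..]);
    var merged = new int[a.Length];
    int i = 0, j = 0, k = 0;
    while (i < left.Length && j < right.Length)
        merged[k++] = left[i] <= right[j] ? left[i++] : right[j++];
    while (i < left.Length) merged[k++] = left[i++];
    while (j < right.Length) merged[k++] = right[j++];
    return merged;
}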
Related
I have this algorithm
int f(int n) {
    int k = 0;
    while (true) {
        if (k == n * n) return k;
        k++;
    }
}
My friend says that it costs O(2^n). I don’t understand why.
The input is n; the while loop iterates n*n times, which is n^2, hence the complexity is O(n^2).
This is based on your source code, not on the title.
For the title, this link may help: complexity of finding the square root.
From the answer by Emil Jeřábek, I quote:
The square root of an n-digit number can be computed in time O(M(n)) using e.g. Newton’s iteration, where M(n) is the time needed to multiply two n-digit integers. The current best bound on M(n) is n log n 2^{O(log∗n)}, provided by Fürer’s algorithm.
You may also look at the interesting entry for sqrt on Wikipedia.
In my opinion the time cost is O(n^2).
The function returns the value k = n^2 after n^2 iterations of the while loop.
I'm Manuel's friend.
What you are not considering is that the input n has a length of log(n) bits. The time complexity would be n^2 if we took the input length to be n itself, but it isn't.
So let x = log(n) be the length of the input; then n = 2^x (since 2^(log n) = n), and so far everything is consistent.
Now, if we calculate the cost as a function of n we get n^2, but n equals 2^x and we need to express the cost as a function of x (because time complexity is measured in the length of the input, not its value), so:
cost = n^2 = (2^x)^2 = 2^(2x) = 4^x, which is exponential in the input length x.
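To make the value-versus-length distinction concrete, here is a small C# sketch (purely illustrative; the class and method names are mine) that counts the loop's iterations as the bit length of the input grows. Each extra bit of input roughly quadruples the work:

using System;

class LengthVsValueDemo
{
    // Same loop as f(n): it runs n*n times before returning.
    static long CountIterations(long n)
    {
        long k = 0;
        while (k != n * n) k++;
        return k;
    }

    static void Main()
    {
        // Doubling n adds one bit to the input but quadruples the iteration count.
        for (int bits = 4; bits <= 12; bits += 4)
        {
            long n = 1L << bits;
            Console.WriteLine("n = 2^" + bits + " = " + n + ": " + CountIterations(n) + " iterations");
        }
    }
}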
"In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows." (https://en.wikipedia.org/wiki/Big_O_notation)
Here's another explanation where the algorithm in question is the primality test: Why naive primality test algorithm is not polynomial
I'm studying for an exam, and the professor gave us a bunch of practice problems that we don't have the answer to. This is one of them, but I've been working on it forever and don't even know if I'm headed in the right direction. I'm not even asking for an answer - just someone to point me in the right direction?
I'm supposed to develop a dynamic programming algorithm (of O(n^2)) for the following function that finds the expected number of acyclic orientations in a graph using this recurrence.
I think I'm supposed to solve the recurrence using the Master Theorem or unfold-and-sum to simplify/solve the recurrence, and then develop the algorithm from there? Any hints or clues would be much appreciated. Thanks!
As you provided both formulas you need, you can find what you want with the following steps:
Use O(n^2) to precompute nCr (as large as you need for n); a rough sketch follows this list.
Use the repeated squaring method to precompute (1+x)^n (I suppose you know the domain of x?), so it is O(n lg n) for all n.
Calculate A_n(x) using those precomputed values; you have at most O(n) subproblems to calculate, each taking O(n), which gives O(n^2).
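A rough C# sketch of the first step, using Pascal's rule C(i,j) = C(i-1,j-1) + C(i-1,j). The array name and the use of long are just my choices here; in practice you may need modular or big-integer arithmetic.

// Fills nCr[i][j] = C(i, j) for 0 <= j <= i <= maxN in O(maxN^2) time and space.
static long[][] PrecomputeBinomials(int maxN)
{
    var nCr = new long[maxN + 1][];
    for (int i = 0; i <= maxN; i++)
    {
        nCr[i] = new long[i + 1];
        nCr[i][0] = nCr[i][i] = 1;
        for (int j = 1; j < i; j++)
            nCr[i][j] = nCr[i - 1][j - 1] + nCr[i - 1][j];   // Pascal's rule
    }
    return nCr;
}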
EDITED:
For point #2, which is to calculate (1+x)^(i(n-i)) for i in [0,n]: although the largest power can be on the order of n^2, there are only about n instances, so we do not need to calculate every power up to n^2.
Let's write down the sequence of powers for the different i in [0,n]: 0, n-1, 2(n-2), 3(n-3), ..., (n-1)(1).
And we can precompute them in a way like this:
function repeat_squaring(base, power) { // O(lg(power)): exponentiation by squaring
    var result = 1;
    while (power > 0) {
        if (power % 2 == 1) result = result * base;
        base = base * base;
        power = Math.floor(power / 2);
    }
    return result;
}
for (var i = 0; i <= n; i++) {
    repeat_squaring(1 + x, i * (n - i)); // n+1 calls in total
}
So now, what is the complexity to compute them in total? Just sum them up!
T = O(lg(n) + lg(2n) + lg(3n) + ... + lg(n·n)) = O(Σ lg(i) + n·lg(n)) = O(lg(n!) + n·lg(n)) = O(n lg n)
For the O(lg(n!)) bound there are two ways to reason about it: one is the famous Stirling's approximation, the other is this post: log(n!)
EDITED 2: For the shrinking-n problem
Observe the pattern of (1+x)^(i(N-i)) for N = n, n-1, n-2, etc.
You can see that we can derive the term (1+x)^j for smaller A_n() from some already calculated (1+x)^(i(N-i)).
We use O(n lg n) as described above to pre-calculate the powers for N = n first; we can also use O(n lg n) to pre-calculate all (1+x)^i for i in [0..n].
Now, as the pattern shows, to get the term (1+x)^(i(N-i)) for consecutive N values (n vs n-1, n-1 vs n-2, ...), you can use O(1) multiplications/divisions by some (1+x)^i with i in [0..n] (depending on your implementation, bottom-up or top-down).
So I still think you only need O(n lg n) to pre-compute those powers, and O(1) to transform them into other powers dynamically when needed. (You can think of it as doing dynamic programming on both (1+x)^(i(N-i)) and A_i() at the same time.)
TL;DR
Of course, if you do not want things to get too complicated, you can just use O(n^2) to do a separate DP on (1+x)^(i(N-i)) for all N in [1..n]:
// High Level Pseudo Code
var nCr[][];
var p[];          // (1+x)^0, (1+x)^1 ... (1+x)^n
var b[];          // (1+x)^0, (1+x)^(n-1), (1+x)^(2(n-2)) ...
var power[N][i];  // (1+x)^(i(N-i)); power[max_n][i] is b[i] actually
var A[];          // main problem!

Precompute nCr[][];   // O(n^2)
Precompute p[];       // O(n lg n)
Precompute b[];       // O(n lg n)
Precompute power[N][i] {
    // O(n^2)
    for all N, for all i
        power[N][i] = power[N+1][i] / p[i]
}
Compute A using all those precomputed arrays, O(n^2)
Pardon me if the question is "silly". I am new to algorithmic time complexity.
I understand that if I have n numbers and I want to sum them, it takes "n steps", which means the algorithm is O(n) or linear time, i.e. the number of steps taken increases linearly with the number of inputs, n.
If I write a new algorithm that does this summing 5 times, one after another, I understand that it is O(5n) = O(n) time, still linear (according to wikipedia).
Question
Say I have 10 different O(n)-time algorithms (sum, linear-time sort, etc.), and I run them one after another on the n inputs.
Does this mean that overall this runs in O(10n) = O(n), linear time?
Yep: O(kn) = O(n) for any constant k.
If you start growing your problem and decide that your 10 linear passes are actually k linear passes where, say, k is the length of a user-supplied array, it would then be incorrect to drop that information from the big-O.
It's best to work it through from the definition of big-O, then learn the rule of thumb once you've "proved" it correct.
If you have 10 O(n) algorithms, that means that there are 10 constants C1 to C10, such that for each algorithm Ai, the time taken to execute it is less than Ci * n for sufficiently large n.
Hence[*] the time taken to run all 10 algorithms for sufficiently large n is less than:
C1 * n + C2 * n + ... + C10 * n
= (C1 + C2 + ... + C10) * n
So the total is also O(n), with constant C1 + ... + C10.
Rule of thumb learned: the sum of a constant number of O(f(n)) functions is O(f(n)).
[*] proof of this left to the reader. Hint: there are 10 different values of "sufficient" to consider.
Yes, O(10n) = O(n). More generally, O(C*n) = O(n), where C is a constant; in this case C is 10. It would only become O(n^2) if C itself grew with n (say C = n), but here C is a constant and does not change with n.
Also note that when summing complexities, the highest-order (most complex) term determines the overall complexity. In this case it is O(n) + O(n) + ... + O(n), ten times, which is O(n).
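A tiny C# sketch of the situation being discussed (names are mine, purely illustrative): several O(n) passes run back to back, so the total work is k·n, which is still linear for constant k.

// Runs k independent O(n) passes over the same array; total work is k * data.Length.
static long RunLinearPasses(int[] data, int k)
{
    long steps = 0;
    for (int pass = 0; pass < k; pass++)      // constant number of passes
        foreach (var value in data)           // each pass is O(n)
            steps++;                          // stand-in for the real O(1) work per element
    return steps;                             // equals k * data.Length
}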
I recently had an interview and was given a small problem that I was to code up.
The problem was basically: find the duplicate in an array of length n, using constant space and O(n) time. Each element is in the range 1..(n-1), and a duplicate is guaranteed to exist. This is what I came up with:
public int findDuplicate(int[] vals) {
int indexSum=0;
int valSum=0;
for (int i=0; i< vals.length; i++) {
indexSum += i;
valSum += vals[i];
}
return valSum - indexSum;
}
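For concreteness, a quick check with a hypothetical input: for vals = {1, 2, 3, 2}, indexSum is 0+1+2+3 = 6, valSum is 1+2+3+2 = 8, and the method returns 8 - 6 = 2, the duplicated value.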
Then we got into a discussion about the runtime of this algorithm. The sum of the series from 0 to n is (n^2 + n)/2, which is quadratic. However, isn't the algorithm O(n) time? The number of operations is bounded by the length of the array, right?
What am I missing? Is this algorithm O(n^2)?
The fact that the sum of the integers from 0 to n is O(n^2) is irrelevant here.
Yes you run through the loop exactly O(n) times.
The big question is, what order of complexity are you assuming on addition?
If O(1) then yeah, this is linear. Most people will assume that addition is O(1).
But what if addition is actually O(b) (where b is the number of bits, and in our case b = log n)? If you are going to assume this, then this algorithm is actually O(n * log n) (adding n numbers, each needing log n bits to represent).
Again, most people assume that addition is O(1).
Algorithms researchers have standardized on the unit-cost RAM model, where words are Theta(log n) bits and operations on words are Theta(1) time. An alternative model where operations on words are Theta(log n) time is not used any more because it's ridiculous to have a RAM that can't recognize palindromes in linear time.
Your algorithm clearly runs in time O(n) and extra space O(1), since convention is for the default unit of space to be the word. Your interviewer may have been worried about overflow, but your algorithm works fine if addition and subtraction are performed modulo any number M ≥ n, as would be the case for two's complement.
tl;dr: whatever problem your interviewer had with this is imaginary, or rooted in an improper understanding of the conventions of theoretical computer science.
You work on each of the n cells once. Linear time.
Yes, the algorithm is linear*. The value of valSum doesn't affect the running time. Taking it to the extreme, the function
int f(int[] vals) {
return vals.length * vals.length;
}
gives n² in one multiplication. Obviously this doesn't mean f is O(n²) ;)
(*: assuming addition is O(1))
The sum of i from i=0 to n is n*(n+1)/2 which is bounded by n^2 but that has nothing to do with running time... that's just the closed form of the summation.
The running time of your algorithm is linear, O(n), where n is the number of elements in your array (assuming the addition operation is a constant time operation, O(1)).
I hope this helps.
What is the worst-case time complexity T(n)?
I'm reading a book about algorithms, and as an example I want to work out T(n) for something like the selection sort algorithm.
Say I'm dealing with selectionSort(A[0..n-1]):
//sorts a given array by selection sort
//input: An array A[0..n - 1] of orderable elements.
//output: Array A[0..n-1] sorted in ascending order
Let me write the pseudocode:
for i <-- 0 to n-2 do
    min <-- i
    for j <-- i+1 to n-1 do
        if A[j] < A[min] then min <-- j
    swap A[i] and A[min]
--------I will write it in C# too---------------
private int[] a = new int[100];
// number of elements in array
private int x;
// Selection Sort Algorithm
public void sortArray()
{
int i, j;
int min, temp;
for( i = 0; i < x-1; i++ )
{
min = i;
for( j = i+1; j < x; j++ )
{
if( a[j] < a[min] )
{
min = j;
}
}
temp = a[i];
a[i] = a[min];
a[min] = temp;
}
}
==================
Now, how do I get T(n), or as it's known, the worst-case time complexity?
That would be O(n^2).
The reason is that you have a single for loop nested in another for loop. The run time of the inner for loop, O(n), happens for each iteration of the outer for loop, which is again O(n). The reason each of these individually is O(n) is that each takes a linear amount of time relative to the size of the input: the larger the input, the longer it takes, on a linear scale, n.
To work out the math, which in this case is trivial, just multiply the complexity of the inner loop by the complexity of the outer loop: n * n = n^2. Remember, for each n in the outer loop, you must again do n in the inner. To clarify: n times for each n.
O(n * n).
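If you want the exact T(n) for selection sort rather than just the bound, count the comparisons: for each i from 0 to n-2 the inner loop does (n-1-i) comparisons, so T(n) = (n-1) + (n-2) + ... + 1 = n(n-1)/2, which is O(n^2).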
O(n^2)
By the way, you shouldn't mix up complexity (denoted by big-O) and the T function. The T function is the number of steps the algorithm has to go through for a given input.
So, the value of T(n) is the actual number of steps, whereas O(something) denotes a complexity. By the conventional abuse of notation, T(n) = O( f(n) ) means that the function T(n) is of at most the same complexity as another function f(n), which will usually be the simplest possible function of its complexity class.
This is useful because it allows us to focus on the big picture: We can now easily compare two algorithms that may have very different-looking T(n) functions by looking at how they perform "in the long run".
#sara jons
In the slide set that you've referenced (and the algorithm therein), the complexity is being measured by counting each primitive/atomic operation in the for loop:
for(j=0 ; j<n ; j++)
{
//...
}
The slides rate this loop as 2n+2 for the following reasons:
The initial set of j=0 (+1 op)
The comparison of j < n (n ops)
The increment of j++ (n ops)
The final condition to check if j < n (+1 op)
Secondly, the comparison within the for loop
if(STudID == A[j])
return true;
This is rated as n ops. Thus, if you add up +1 op, n ops, n ops, +1 op, and n ops, the result is 3n + 2. So T(n) = 3n + 2.
Recognize that T(n) is not the same as O(n).
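Putting those counts next to the code may help; this is just the loop above wrapped in a hypothetical method, with the tally from the slides as comments:

// Hypothetical linear search from the slides, with per-line operation counts annotated.
static bool Contains(int[] A, int n, int StudID)
{
    for (int j = 0; j < n; j++)   // init j=0: 1 op; test j<n: n+1 ops; j++: n ops  => 2n+2
    {
        if (StudID == A[j])       // executed n times in the worst case => n ops
            return true;
    }
    return false;                 // worst-case total: (2n+2) + n = 3n+2, so T(n) = 3n+2
}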
Another doctoral-comp flashback here.
First, the T function is simply the amount of time (usually in some number of steps, about which more below) an algorithm takes to perform a task. What a "step" is, is somewhat defined by the use; for example, it's conventional to count the number of comparisons in sorting algorithms, but the number of elements searched in search algorithms.
When we talk about the worst-case time of an algorithm, we usually express that with "big-O notation". Thus, for example, you hear that bubble sort takes O(n²) time. When we use big O notation, what we're really saying is that the growth of some function -- in this case T -- is no faster than the growth of some other function times a constant. That is
T(n) = O(n²)
means that for any n, no matter how large, there is a constant k for which T(n) ≤ kn². A point of some confusion here is that we're using the "=" sign in an overloaded fashion: it doesn't mean the two are equal in the numerical sense, just that T(n) is bounded by kn².
In the example in your extended question, it looks like they're counting the number of comparisons in the for loop and in the test; it would help to be able to see the context and the question they're answering. In any case, though, it shows why we like big-O notation: W(n) here is O(n). (Proof: there exists a constant k, namely 5, for which W(n) = 3n + 2 ≤ kn for all n ≥ 1. It follows by the definition of O(n).)
If you want to learn more about this, consult any good algorithms text, eg, Introduction to Algorithms, by Cormen et al.
Write pseudocode to search, insert, and remove student information from the hash table. Calculate the best and worst case time complexities.
3n + 2 is the correct answer as far as the loop is concerned. At each step of the loop, 3 atomic operations are done. j++ is actually two operations, not one. and j