How To Calculate Complexity? - algorithm

I am a beginner in algorithms and I don't know how to calculate complexity.
Example:
int x=10,y;
y = x;
What is the complexity in example above?
Thanks

That should be O(1) if you refer to the O-Notation.

In Big O Notation this corresponds to O(1), which basically means that the run-time of the operation is constant, or at least bounded by some constant. Ergo, the run-time does not depend on the input you have. As you may infer from what I wrote, Big O Notation only gives an upper bound on the operation. There are also other notations which give a lower bound, and so on.
An example of a case where it does depend on the input could be:
int res = 0;
int[] arr = getSomeArray();
foreach (int i in arr)
    res = res + i;
Here the run-time depends on how big the array is. If we call the length of the array n, then this will be O(n). Again, the Big O Notation does not specify exactly how long it will take to execute; in this case it just says that we can multiply n by some constant, and the loop will finish within n*(that constant) time.
A more detailed explanation is given here: What is a plain English explanation of "Big O" notation?
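To see the linear behaviour concretely, here is a minimal Java sketch of the array-summing loop above that counts its own iterations (the class and method names are invented here):

```java
public class LinearDemo {
    // Sum the array exactly as in the snippet above, counting loop iterations.
    static long iterationCount(int[] arr) {
        int res = 0;
        long count = 0;
        for (int i : arr) {
            res = res + i;
            count++; // one unit of work per element
        }
        return count;
    }

    public static void main(String[] args) {
        // Doubling the input size doubles the iteration count: that linear
        // relationship is exactly what O(n) captures.
        for (int n : new int[]{1000, 2000, 4000}) {
            System.out.println(n + " elements -> " + iterationCount(new int[n]) + " iterations");
        }
    }
}
```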

Related

Sqrt(n) time complexity

I am new to time complexities and asymptotic notation. I was looking at this video: https://www.youtube.com/watch?v=9TlHvipP5yA to figure out the time complexity of a piece of code shown below
The video concludes that the time complexity for the code below is O(sqrt(n)).
When I supply different n values, I am expecting sqrt(n) outputs, but this doesn't verify the O(sqrt(n)) analysis. Can someone please explain why that is?
For example, if I have n = 10, I am expecting sqrt(10) outputs, which is ~3 (or 4 if you round up, I guess). Is this illogical?
Thanks in advance.
int p = 0;
for (int i = 0; p <= n; i++) {
    p = p + i;
    System.out.println(p);
}
Big O is not used to compute the number of instructions you are expected to execute. For a given n, calculating the square root of n will not give you the exact number of times the instruction executes. Big O describes what happens with your function as input size gets very large. Analysis of your function when n is 10 or even 100 does not apply to Big O.
When we say a function's time complexity is O(sqrt(n)), we mean that the function belongs in a class of functions where the time required is proportional to the square root of the value of n, but only for very large values of n.
If you watch the video, the instructor simplifies the k(k+1) / 2 term to k^2 by taking the leading term, because the k term becomes insignificant compared to the k^2 term, when k is very large.
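To connect the analysis back to the original question, here is a Java sketch that counts the loop's iterations (class and method names are invented here). After k iterations, p = k(k+1)/2, so the loop stops once k(k+1)/2 exceeds n, i.e. after roughly sqrt(2n) iterations, not sqrt(n). That is why n = 10 produces more than 3 or 4 outputs, yet the complexity class is still O(sqrt(n)), since the factor sqrt(2) is a constant:

```java
public class SqrtDemo {
    // Count how many times the loop body from the question executes.
    static long iterations(long n) {
        long p = 0, count = 0;
        for (long i = 0; p <= n; i++) {
            p = p + i;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        for (long n : new long[]{100, 10_000, 1_000_000}) {
            // The count tracks sqrt(2n), which grows as O(sqrt(n)).
            System.out.println(n + ": " + iterations(n) + " iterations, sqrt(2n) = " + Math.sqrt(2.0 * n));
        }
    }
}
```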

What is the time complexity of the following function in big-O notation?

I'm working on understanding big O and I have run into a tricky problem.
When I look at this code, I immediately think O(n) just by looking at
the for loop but the line
result = result * k
makes me think it is something different.
int power(int n, int k)
{
    int result = n;
    for (int i = 1; i < n; i++) {
        result = result * k;
    }
    return result;
}
Just looking for some clear explanation on why I may or may not be wrong
This is not a tricky problem unless you're using a language like C++ that permits operator overloading, and the operator*() method is overloaded for the types of k or result (or there is a standalone, free overloaded operator function taking those types as arguments). I'm assuming this is not the case, so:
Your loop is of order O(n). Within that loop, what do you do? Do you do another loop? Or call a method that does so? No. Is your system's multiply operator complexity based on the size of its operands? Perhaps technically yes, since a 32-bit multiply will be faster on a 32-bit core than a 64-bit multiply, but that's based on the types of the operands, not on the operands' values. Multiply operations are normally of order O(1).
So the overall complexity is O(n*1), or just O(n).
The only way this is anything other than O(n) is if the multiply operator is overloaded and implemented in a naive way, e.g. if it does the integer multiplication by adding result to itself k times. In that case each iteration costs O(k), and the overall complexity would be O(n*k).
But in any normal case the complexity is just O(n).
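To make the operator-overloading caveat concrete, here is a Java sketch of a naive multiply implemented by repeated addition; each call costs O(k) instead of O(1), so the n - 1 iterations of the power loop cost O(n*k) in total (slowMultiply and slowPower are names invented here):

```java
public class NaiveMultiply {
    // A hypothetical "overloaded" multiply done by repeated addition:
    // O(k) additions instead of a single O(1) hardware multiply.
    static int slowMultiply(int a, int k) {
        int sum = 0;
        for (int i = 0; i < k; i++) {
            sum += a;
        }
        return sum;
    }

    // The same power loop as above, but using the naive multiply:
    // n - 1 iterations, each doing O(k) work, so O(n*k) overall.
    static int slowPower(int n, int k) {
        int result = n;
        for (int i = 1; i < n; i++) {
            result = slowMultiply(result, k);
        }
        return result;
    }

    public static void main(String[] args) {
        // slowPower(2, 3) multiplies 2 by 3 exactly (n - 1) = 1 time.
        System.out.println(slowPower(2, 3)); // prints 6
    }
}
```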
That line is only a single statement, not a group of statements whose iteration count depends on n:
result = result * k
It is O(1) in any case; its time complexity can't change. The time complexity of your code is O(n), and no more than O(n).

Precise Θ notation bound for the running time as a function

I'm studying for an exam, and i've come across the following question:
Provide a precise (Θ notation) bound for the running time as a
function of n for the following function
for i = 1 to n {
    j = i
    while j < n {
        j = j + 4
    }
}
I believe the answer would be O(n^2), although I'm certainly an amateur at the subject. My reasoning is that the outer loop takes O(n) and the inner loop takes about O(n/4), resulting in O(n^2/4); since the constant factor is dropped, that simplifies to O(n^2).
Any clarification would be appreciated.
If you proceed using Sigma notation, and obtain T(n) equals something, then you get Big Theta.
If T(n) is less or equal, then it's Big O.
If T(n) is greater or equal, then it's Big Omega.
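Carrying that out with Sigma notation for the loop above (a sketch; for a given i, the inner while runs about (n - i)/4 times):

```latex
T(n) = \sum_{i=1}^{n} \left\lceil \frac{n-i}{4} \right\rceil
     \approx \frac{1}{4} \sum_{i=1}^{n} (n-i)
     = \frac{1}{4} \cdot \frac{n(n-1)}{2}
     = \frac{n^2 - n}{8}
```

Since T(n) equals a quadratic up to constant factors (not merely bounded above by one), the tight bound is Θ(n^2).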

Big-O notation representation of a simple algorithm

How can I represent its complexity with Big O notation? I am a little bit confused, since the starting index of the second for loop changes according to the index of the outer loop. Is it still O(n^2), or less complex? Thanks in advance.
for (int k = 0; k < arr.length; k++) {
    for (int m = k; m < arr.length; m++) {
        // do something
    }
}
Your estimate comes from the arithmetic progression formula n + (n-1) + ... + 1 = n(n+1)/2 and is thus O(n^2). Why is your case a progression? Because your loops perform n + (n-1) + ... + 1 iterations in total.
If you add up all iterations of the second loop, you get 1+2+3+...+n, which is equal to n(n+1)/2 (where n is the array length). That is n^2/2 + n/2. As you may already know, the relevant term in big-O notation is the one with the biggest power, and coefficients are not relevant. So your complexity is still O(n^2).
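The closed form can be checked directly by instrumenting the nested loop (a sketch; the class and method names are invented here):

```java
public class TriangularDemo {
    // Count the total inner-loop iterations for an array of length n.
    static long countIterations(int n) {
        long count = 0;
        for (int k = 0; k < n; k++) {
            for (int m = k; m < n; m++) {
                count++; // stands in for "do something"
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 1000;
        // Total work is n + (n-1) + ... + 1 = n(n+1)/2, i.e. Theta(n^2).
        System.out.println(countIterations(n) + " == " + (long) n * (n + 1) / 2);
    }
}
```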
Well, the runtime is circa half of the n^2 loop,
but in big O notation it is still O(n^2),
because any constant time/cycle operation is represented as O(1),
so O((n^2)/2) -> O((n^2)/c) -> O(n^2).
Unofficially there are many people using O((n^2)/2), including me for my own purposes (it's more intuitive and comparable... closer to cycles/runtime).
Hope it helps.

What is Big O of a loop?

I was reading about Big O notation. It stated,
The big O of a loop is the number of iterations of the loop into
number of statements within the loop.
Here is a code snippet,
for (int i = 0; i < n; i++)
{
    cout << "Hello World" << endl;
    cout << "Hello SO";
}
Now according to the definition, the Big O should be O(n*2), but it is O(n). Can anyone help me out by explaining why that is?
Thanks in advance.
If you check the definition of the O() notation you will see that (multiplier) constants don't matter.
The work to be done within the loop is not 2. There are two statements, for each of them you have to do a couple of machine instructions, maybe it's 50, or 78, or whatever, but this is completely irrelevant for the asymptotic complexity calculations because they are all constants. It doesn't depend on n. It's just O(1).
O(1) = O(2) = O(c) where c is a constant.
O(n) = O(3n) = O(cn)
O(n) is used to measure the loop against a mathematical function (like n^2, n^m, ...).
So if you have a loop like this
for (int i = 0; i < n; i++) {
    // something
}
The best-describing math function for this loop is O(n) (where n is a number between 0 and infinity).
If you have a loop like this
for (int i = 0; i < n*2; i++) {
}
This means it will take O(n*2); math function = n*2.
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
    }
}
This loop takes O(n^2) time; math function = n*n.
This way you can calculate how long your loop needs for n = 10 or 100 or 1000,
and this way you can build graphs for loops and such.
Big-O notation ignores constant multipliers by design (and by definition), so being O(n) and being O(2n) is exactly the same thing. We usually write O(n) because that is shorter and more familiar, but O(2n) means the same.
First, don't call it "the Big O". That is wrong and misleading. What you are really trying to find is asymptotically how many instructions will be executed as a function of n. The right way to think about O(n) is not as a function, but rather as a set of functions. More specifically:
O(n) is the set of all functions f(x) such that there exists some constant M and some number x_0 where for all x > x_0, f(x) < M x.
In other words, as n gets very large, at some point the growth of the function (for example, number of instructions) will be bounded above by a linear function with some constant coefficient.
Depending on how you count instructions that loop can execute a different number of instructions, but no matter what it will only iterate at most n times. Therefore the number of instructions is in O(n). It doesn't matter if it repeats 6n or .5n or 100000000n times, or even if it only executes a constant number of instructions! It is still in the class of functions in O(n).
To expand a bit more, the class O(n*2) = O(0.1*n) = O(n), and the class O(n) is strictly contained in the class O(n^2). As a result, that loop is also in O(2*n) (because O(2*n) = O(n)), and contained in O(n^2) (but that upper bound is not tight).
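To tie this back to the original snippet: counting statements instead of iterations just multiplies the result by a constant, which the definition above absorbs into M (a sketch with invented names):

```java
public class StatementCount {
    // Count the statements executed by the loop body (2 per iteration,
    // standing in for the two cout lines in the question).
    static long statementsExecuted(int n) {
        long statements = 0;
        for (int i = 0; i < n; i++) {
            statements++; // first output statement
            statements++; // second output statement
        }
        return statements;
    }

    public static void main(String[] args) {
        // 2n statements for n iterations; 2n is still in O(n) because the
        // constant factor 2 is absorbed by the M in the definition.
        System.out.println(statementsExecuted(10)); // prints 20
    }
}
```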
O(n) means the loop's time complexity increases linearly with the number of elements.
2*n is still linear, so you say the loop is of order O(n).
However, the loop you posted is O(n) since the instructions in the loop take constant time. Two times a constant is still a constant.
The fastest growing term in your program is the loop and the rest is just the constant so we choose the fastest growing term which is the loop O(n)
In case if your program has a nested loop in it this O(n) will be ignored and your algorithm will be given O(n^2) because your nested loop has the fastest growing term.
Usually big O notation expresses the number of principal operations in a function.
Here you're iterating over n elements, so the complexity is O(n).
It is surely not O(n^2), since quadratic is the complexity of algorithms like bubble sort, which compare every element in the input with all the other elements.
As you may remember, bubble sort compares every element with the other n elements of the list in order to move it towards its right position (the bubbling behaviour).
At most, you can claim that your algorithm has complexity O(2n), since it prints 2 phrases for every element in the input, but in big O notation O(n) is equivalent to O(2n).
