What is meant by code or algorithm complexity? How can we calculate it? - code-complexity

What is meant by code or algorithm complexity? How can we calculate it? And what does a representation of complexity mean? I'm so confused by this term!

This gets asked a lot; try the following:
Plain English explanation of Big O
Big O, how do you calculate/approximate it?
Big-O for Eight Year Olds?

Related

what is the difference between nlogn and logn in the view of how the algorithm grows

I have a question regarding my lecture on data structures and algorithms.
I have a problem understanding how an algorithm grows. I don't understand the O notations, and I don't understand the difference between them, for example between O(log n) and O(n log n).
I hope someone can help me. Thank you.
To compare time complexities you should be able to make some simple mathematical arguments. In your example:
for every n > 1, multiplying both sides of n > 1 by log n gives n log n > log n, so n log n is worse than log n. An easy way to understand this is to compare the graphs of the functions, as suggested in the comments, or to try some big inputs and observe the asymptotic behavior. For example, for n = 1,000,000 (using base-10 logs):
log(1,000,000) = 6, while 1,000,000 · log(1,000,000) = 6,000,000, which is far greater.
Also notice that you don't count constants in big O notation: for example, 4n is O(n), n is O(n), and cn + w is also O(n) for constants c and w.
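The numeric comparison above can be reproduced directly; a minimal sketch in Python, using base-10 logs to match the example (the base only changes a constant factor, which big O ignores):

```python
import math

n = 1_000_000
log_n = math.log10(n)        # log10(1,000,000) = 6
n_log_n = n * math.log10(n)  # 1,000,000 * 6 = 6,000,000

print(log_n)    # 6.0
print(n_log_n)  # 6000000.0
```

Even at this modest input size, n log n is a million times larger than log n, which is why the two classes behave so differently in practice.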

Asymptotic time complexity exponential function

Dear friends, I have some confusion regarding the time complexity of my algorithm. Its time complexity is 3^(0.5n). Is it correct to write 3^(0.5n) as (3^0.5)^n? I saw it written that way in a thesis.
Yes, that is correct. It follows from the well-known identity for exponentiation:
(a^b)^c = a^(b·c)
But what does that math formula have to do with programming?
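The identity can be checked numerically, which also shows why the rewrite is useful: (3^0.5)^n makes the base of the exponential explicit (≈ 1.732 per step). A minimal sketch (the choice of n = 10 is arbitrary):

```python
n = 10
lhs = 3 ** (0.5 * n)   # 3^(0.5n) = 3^5 = 243
rhs = (3 ** 0.5) ** n  # (sqrt(3))^n ≈ 1.732^10

# The two forms agree up to floating-point rounding.
print(abs(lhs - rhs) < 1e-6)  # True
```

Since the base 3^0.5 ≈ 1.732 is greater than 1, the running time is still exponential in n, just with a smaller base than 3^n.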

Why Algorithms with logarithmic time are considered to be fast?

My knowledge of big-O notation is limited, but I read in some posts that algorithms with logarithmic time are fast. Can someone explain why?
For large N, the percentage of saved operations compared to linear equivalents grows rapidly. As the gradient of a log(N) is proportional to 1/N, for large N O(NlogN) behaves more like O(N) than O(N^2) - for this reason they are often called "linearithmic" or "quasi-linear".
To add on to what willyonka said, here is a graph: [graph comparing growth rates omitted]
See What is a plain English explanation of "Big O" notation?
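A concrete way to see why logarithmic time is fast: binary search inspects about log2(n) elements, so doubling the input adds only one extra step. A minimal sketch with a step counter (the function name and counter are illustrative, not from the original posts):

```python
def binary_search(sorted_list, target):
    """Return (index, steps) for target, or (-1, steps) if absent."""
    lo, hi, steps = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        steps += 1                 # one comparison per halving of the range
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid, steps
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

# Searching the worst-case element among a million items:
_, steps = binary_search(list(range(1_000_000)), 999_999)
print(steps)  # 20 — close to log2(1,000,000) ≈ 19.9
```

Compare that with a linear scan, which would need up to 1,000,000 comparisons for the same input.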

Big O notation Algorithm

I have recently been learning about algorithms and know that good algorithms usually already exist, so we often don't need to write our own. Here is a problem I am facing from a question paper:
If a function is O(n), can we say that it is also O(n^2)?
Big O is an upper bound. So yes, n is in O(n^2), but not vice versa. Also, both n and n^2 are in O(n^3).
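This follows straight from the definition: f is in O(g) if f(n) ≤ c·g(n) for some constant c and all n ≥ n0. For f(n) = n and g(n) = n^2, the constants c = 1 and n0 = 1 work. A minimal sketch checking the inequality over a finite range (the range is illustrative; the definition of course requires it for all n ≥ n0):

```python
# f(n) = n is in O(n^2): n <= c * n^2 holds with c = 1 for every n >= 1.
c, n0 = 1, 1
print(all(n <= c * n * n for n in range(n0, 10_000)))  # True

# The converse fails: n^2 <= c * n eventually breaks for any fixed c.
c = 100
print(all(n * n <= c * n for n in range(1, 10_000)))   # False
```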

Calculating a Theta (Tight Bound) Estimation

I was just wondering if someone could explain this a little to me. I want to know how to calculate an estimate for a Theta function using the integrals or the truncate-and-bound method.
For example, how would you estimate:
∑_{i=1}^{n} (i·√i + 1)
Have you looked at:
Can someone explain how Big-Oh works with Summations?
What is the difference between Θ(n) and O(n)?
Big O Notation Homework--Code Fragment Algorithm Analysis?
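For the sum above, the integral method gives a tight bound directly. A sketch, rewriting i·√i as i^{3/2} and bounding the increasing function x^{3/2} by integrals:

```latex
\sum_{i=1}^{n}\bigl(i\sqrt{i}+1\bigr) \;=\; n + \sum_{i=1}^{n} i^{3/2}.

% For an increasing f, \int_0^n f \le \sum_{i=1}^n f(i) \le \int_1^{n+1} f:
\int_{0}^{n} x^{3/2}\,dx \;\le\; \sum_{i=1}^{n} i^{3/2} \;\le\; \int_{1}^{n+1} x^{3/2}\,dx,

\frac{2}{5}\,n^{5/2} \;\le\; \sum_{i=1}^{n} i^{3/2} \;\le\; \frac{2}{5}\Bigl((n+1)^{5/2}-1\Bigr).
```

Both bounds are Θ(n^{5/2}), and the extra n term is lower order, so the whole sum is Θ(n^{5/2}).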
