This question already has an answer here: Big O of append in Golang (1 answer). Closed 6 years ago.
Java's ArrayList add method runs in amortized constant time. Same for vector's push_back in C++.
So does append() in Go also run in amortized constant time?
See https://blog.golang.org/slices
The answer is 'yes', as you would expect: when the backing array is full, append allocates a new array whose capacity grows by a constant factor, so the copying cost amortizes to O(1) per append.
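As a rough illustration (a minimal sketch of my own; the exact growth factors are an implementation detail of the Go runtime, not something the language guarantees), the snippet below prints the slice capacity each time append has to reallocate. The capacity grows geometrically, which is what keeps the total copying work over n appends at O(n), i.e. amortized O(1) per append.

    package main

    import "fmt"

    func main() {
        var s []int
        prevCap := cap(s)
        // Append 1000 elements and report every reallocation. The capacities
        // grow by a roughly constant multiplicative factor, so only O(log n)
        // reallocations happen and the total copying work stays linear in n.
        for i := 0; i < 1000; i++ {
            s = append(s, i)
            if cap(s) != prevCap {
                fmt.Printf("len=%d cap=%d\n", len(s), cap(s))
                prevCap = cap(s)
            }
        }
    }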
This question already has answers here: How can I find the time complexity of an algorithm? (10 answers) and Big O, how do you calculate/approximate it? (24 answers). Closed 5 years ago.
Hello, I have two functions, EX4 and EX5, shown in the image, and the question asks for their runtimes. The given answers are O(n^2) for EX4 and O(n^4) for EX5, and I don't understand why.
Question for EX4:
The inner loop runs from j = 0 to i. From my perspective, summed over all outer iterations this is 1 + 2 + ... + n = n(n+1)/2, so the inner loop alone is O(n^2).
However, the outer loop runs from i = 0 to n, which is O(n). So I thought the answer for EX4 was O(n) * O(n^2) = O(n^3), but the given answer is O(n^2). Why is that?
Question for EX5:
Similarly, for EX5 I thought the inner loop contributes n(n+1)/2 = O(n^2) and the outer loops contribute n * n = O(n^2), so the whole runtime becomes O(n^2) * O(n^2) = O(n^4), which matches the given answer. But if that reasoning is valid, then my answer to EX4 should also be valid. Why is EX5 O(n^4) while EX4 is only O(n^2)?
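I cannot see the image, but here is a hedged sketch (my own reconstruction of a loop with the EX4 shape described above: an outer loop from i = 0 to n and an inner loop from j = 0 to i) that shows where the double counting creeps in. The sum 1 + 2 + ... + n already counts every execution of the inner body across all iterations of the outer loop, so multiplying it by the outer loop's O(n) again counts that work twice.

    package main

    import "fmt"

    func main() {
        n := 100
        count := 0
        // The outer loop runs n times; on iteration i the inner loop runs i times.
        // Total executions of the inner body: 0 + 1 + ... + (n-1) = n(n-1)/2,
        // which is O(n^2). That sum already accounts for the outer loop, so
        // there is no extra factor of n to multiply in.
        for i := 0; i < n; i++ {
            for j := 0; j < i; j++ {
                count++
            }
        }
        fmt.Println(count, n*(n-1)/2) // both print 4950
    }

The same discipline applies in general: count how often the innermost body runs, summed once over all enclosing loops.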
This question already has answers here: Big O, how do you calculate/approximate it? (24 answers). Closed 6 years ago.
Show that (n+1)^5 is O(n^5): find the constants c and n0, and please explain the steps.
The limit of (n+1)^5 / n^5 as n goes to infinity is 1.
Since this is neither 0 nor infinity, the two functions have the same order of growth, which is traditionally written as O(n^5).
This assumes that each step of whatever you are measuring takes constant time.
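To give concrete constants (my own addition, not part of the original answer): since n + 1 <= 2n for every n >= 1, raising both sides to the fifth power gives

    (n+1)^5 <= (2n)^5 = 32 * n^5 for all n >= 1,

so c = 32 and n0 = 1 witness (n+1)^5 = O(n^5). Any c > 1 also works if n0 is taken large enough, since the ratio tends to 1, but c = 32, n0 = 1 is the easiest choice to justify.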
This question already has answers here: What is the difference between Θ(n) and O(n)? (9 answers) and What is a plain English explanation of "Big O" notation? (43 answers). Closed 8 years ago.
When people explain Theta notation, they just start talking about a function T(n) without explaining what it is. Is it just a given function?
And why is it Theta(f(n)) instead of Theta(n)? Where does f(n) come from? f(n) is usually described as a given function, so what is T(n)? Are they both given? This is rarely explained either.
All the explanations I can find are mathematical, but they consistently fail to define things properly and instead just start talking about them out of the blue.
Thanks,
- A confused student
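For concreteness, here is one way to pin the symbols down (my own illustration, not from the original post). T(n) usually names the exact cost of the specific algorithm being analyzed, e.g. T(n) = 3n^2 + 5n + 7 basic operations on an input of size n, while f(n) is the simpler benchmark function you compare it against, e.g. f(n) = n^2. Writing T(n) = Theta(f(n)) then means there exist constants c1, c2 > 0 and n0 such that

    c1 * f(n) <= T(n) <= c2 * f(n) for all n >= n0;

in this example c1 = 3, c2 = 4, n0 = 7 work. It is written Theta(f(n)) rather than Theta(n) because the benchmark need not be linear, as this quadratic example shows.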
This question already has answers here: Big O of Recursive Methods (2 answers). Closed 8 years ago.
How do I find the Big O of the following recursive function using the recursion (repeated-expansion) method?
T(n) = (n-1)T(n-1) + (n-1)T(n-2)
Anyway, I tried to solve this case using the classic recurrence-relation methodology: it is all about checking whether a pattern emerges as you expand.
The conclusion is a very expensive algorithm; factorial and exponential orders of growth are the enemies of computer science.
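To make the factorial growth explicit (my own expansion sketch, not from the original post, assuming T is non-negative): dropping the second term gives

    T(n) >= (n-1) * T(n-1) >= (n-1)(n-2) * T(n-2) >= ... >= (n-1)! * T(1),

so T(n) = Omega((n-1)!). If T is also non-decreasing, then T(n-2) <= T(n-1) gives the matching flavor of upper bound: T(n) <= 2(n-1) * T(n-1) <= 2^(n-1) * (n-1)! * T(1), i.e. a factorial times an exponential factor.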
This question already has answers here (possible duplicate): How to find a binary logarithm very fast? (O(1) at best). Closed 11 years ago.
How does the log function work?
How is the log of a with base b calculated?
There are many algorithms for doing this. One of my favorites (WARNING: shameless plug) is this one based on the Fibonacci numbers. The code contains a fairly elaborate comment that explains how the math works. Given a and a^b, it runs in O(lg b) time and O(1) space.
Hope this helps!
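The Fibonacci-based algorithm from the link is not reproduced here. As a minimal sketch of the basic idea for integer logarithms (my own example, using the naive method rather than the O(lg b) one from the link), the function below computes floor(log base b of a) by repeated division:

    package main

    import "fmt"

    // ilog returns floor(log base b of a) for integers a >= 1 and b >= 2,
    // by counting how many times a can be divided by b before it drops below b.
    // This takes O(log_b a) divisions.
    func ilog(a, b uint64) int {
        k := 0
        for a >= b {
            a /= b
            k++
        }
        return k
    }

    func main() {
        fmt.Println(ilog(1000, 10)) // 3
        fmt.Println(ilog(1024, 2))  // 10
        fmt.Println(ilog(999, 10))  // 2
    }

For floating-point arguments, libraries typically go through the change-of-base identity log_b(a) = ln(a) / ln(b), reusing the natural-log routine.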