Why n! = O(n^n) while log(n!) = Θ(log(n^n)) [closed]

It can be proved that n! = O(n^n) (in fact n! = o(n^n), i.e. strictly smaller) by considering that n! = n*(n-1)*...*2*1 while n^n = n*n*n*...*n: each of the n factors of n! is at most n.
However, log(n!) = Θ(n log n) and log(n^n) = n log n = Θ(n log n).
I guess log is an increasing function, so it should not change the relationship. How does this happen?

I guess log is an increasing function
Whatever that means exactly, it is not enough to preserve relative order of growth.
A simple example: n² grows strictly faster than n, but ln n² = 2 ln n grows at the same rate as ln n.

n^n grows faster than n!, but when you apply the log to both sides, the difference stays within a constant factor.
For comparison, consider that n^3 grows strictly faster than n^2, but log(n^3) = 3 log n = Θ(2 log n) = Θ(log(n^2)).
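If it helps to see both claims numerically, here is a small sketch (plain standard-library Python, my own illustration rather than anything from the answers above): n!/n^n collapses toward 0, while log(n!)/log(n^n) creeps toward 1. The convergence of the log ratio is slow because, by Stirling's approximation, the gap is only a 1/log n correction.

```python
# Quick numeric check of both claims, standard library only.
# math.lgamma(n + 1) computes log(n!) without overflowing for large n.
import math

for n in (10, 100, 1000, 10000):
    log_fact = math.lgamma(n + 1)             # log(n!)
    log_pow = n * math.log(n)                 # log(n^n)
    ratio_raw = math.exp(log_fact - log_pow)  # n! / n^n (underflows to 0 for large n)
    print(f"n={n:>6}  n!/n^n = {ratio_raw:.3e}   log(n!)/log(n^n) = {log_fact / log_pow:.4f}")
```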

Related

Prove or disprove either t(n) ∈ O(g(n)), or t(n) ∈ Ω(g(n)), or both [closed]

Does anyone know how to prove or disprove the following:
For any two nonnegative functions t(n) and g(n) defined on the set of nonnegative integers, either t(n) ∈ O(g(n)), or t(n) ∈ Ω(g(n)), or both.
I found an answer to this question on Chegg, but it doesn't make sense to me, since it simply showed that t(n) = g(n) when n = 1. I think that answer is wrong, because in that case the assertion would still be true: it says "or both", which includes the case t(n) = g(n).
I hope someone can tell me whether this assertion is true or false, with a proof.
It's false. For example, let f(n) = 1 if n is a multiple of 3 and f(n) = n otherwise, and let g(n) = 1 if n is a multiple of 2 and g(n) = n otherwise.
Then f is neither bounded above nor bounded below by any constant multiple of g: along n = 3, 9, 15, ... the ratio f(n)/g(n) = 1/n tends to 0, while along n = 2, 4, 8, ... the ratio equals n and tends to infinity.
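To see the oscillation concretely, here is a minimal Python sketch of this counterexample; the two loops sample exactly the subsequences used in the argument above.

```python
# f/g tends to 0 along odd multiples of 3, and to infinity along powers of 2,
# so no single constant c can satisfy f <= c*g or f >= c*g for all large n.
def f(n): return 1 if n % 3 == 0 else n
def g(n): return 1 if n % 2 == 0 else n

for n in (3, 9, 15, 21, 27):    # odd multiples of 3: f/g = 1/n -> 0
    print(f"n={n:2}: f/g = {f(n) / g(n):.4f}")
for n in (2, 4, 8, 16, 32):     # powers of 2 (not multiples of 3): f/g = n -> inf
    print(f"n={n:2}: f/g = {f(n) / g(n):.1f}")
```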

The space complexity is always the lower bound of the time complexity [closed]

My book states that for a code with T(n) time complexity and S(n) space complexity, the following statement holds:
T(n) = Ω(S(n)).
My question is: Why does this statement hold?
We are speaking of sequential algorithms.
Then a space complexity of S(n) means that the algorithm somehow inspects each of S(n) different memory locations at least once. In order to visit that many memory locations, a sequential algorithm needs Ω(S(n)) time.
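As a toy illustration (my own sketch, not part of the original answer): any code that genuinely fills Θ(n) memory cells must spend Ω(n) time doing so, which is the inequality in miniature.

```python
# Using Theta(n) space forces Omega(n) time: every cell that counts toward
# the space bound has to be written (or read) at least once.
def prefix_sums(xs):
    out = [0] * len(xs)          # Theta(n) extra space...
    total = 0
    for i, x in enumerate(xs):   # ...and the loop that fills it is Omega(n) time
        total += x
        out[i] = total
    return out

print(prefix_sums([3, 1, 4, 1, 5]))  # [3, 4, 8, 9, 14]
```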

Prove n^2 + 5 log(n) = O(n^2) [closed]

I am trying to prove that n^2 + 5 log(n) = O(n^2), O representing big-O notation. I am not great with proofs and any help would be appreciated.
Informally, big-O keeps only the fastest-growing term as n grows arbitrarily large. Since n^2 grows much faster than log(n), the claim should already be plausible.
More formally, f(n) = O(g(n)) means that the ratio f(n)/g(n) is bounded by some constant for all sufficiently large n. Here lim(n->inf)((n^2 + 5 log(n))/n^2) = 1, and a ratio with a finite limit is certainly eventually bounded, so n^2 + 5 log(n) = O(n^2). (A limit of exactly 1 in fact proves the stronger statement that the two functions are asymptotically equivalent.)
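If you want a fully formal proof, you only need explicit witnesses c and n_0 in the big-O definition; one workable choice (my own, not from the answer above) is c = 6 and n_0 = 1:

```latex
% For all n >= 1 we have \log n \le n \le n^2, so the lower-order term
% is absorbed into the quadratic one with constant c = 6 and threshold n_0 = 1.
\[
  \forall n \ge 1:\qquad
  n^2 + 5\log n \;\le\; n^2 + 5n^2 \;=\; 6n^2,
\]
\[
  \text{hence } n^2 + 5\log n = O(n^2) \text{ with } c = 6,\ n_0 = 1.
\]
```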

Big Oh Notation Confusion [closed]

I'm not sure if this is a problem with my understanding, but this aspect of Big-O notation seems strange to me. Say you have two algorithms: the first performs n^2 operations and the second performs n^2 - n operations. Because of the dominance of the quadratic term, both algorithms have complexity O(n^2), yet the second algorithm will always be better than the first. That seems weird to me; Big-O notation makes it seem like they are the same.
Big O is not about the time it takes to execute your algorithm; it is about how well it scales when presented with large data sets (large values of n).
When presented with a large data set, the n^2 term quickly overshadows any linear term, so the linear term becomes insignificant.
When n grows towards infinity, n^2 will be much greater than n, so the -n won't make any significant difference to the outcome.
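A short sketch that puts numbers on these answers: the absolute gap n keeps growing, but the relative gap (n^2 - n)/n^2 = 1 - 1/n tends to 1, which is exactly why both counts are Θ(n^2).

```python
# The difference between n^2 and n^2 - n is real but shrinks *relative*
# to the total, so asymptotically the two operation counts are the same.
for n in (10, 100, 1000, 100000):
    quad = n * n
    quad_minus_linear = n * n - n
    print(f"n={n:>6}: n^2={quad:>12}  n^2-n={quad_minus_linear:>12}  ratio={quad_minus_linear / quad:.5f}")
```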

Are any of the state of the art Maximum Flow algorithms practical? [closed]

For the maximum flow problem, there seem to be a number of very sophisticated algorithms, with at least one developed as recently as last year. Orlin's Max flows in O(mn) time or better gives an algorithm that runs in O(VE).
On the other hand, the algorithms I most commonly see implemented are (I don't claim to have done an exhaustive search; this is just from casual observation):
Edmonds-Karp, O(VE^2)
Push-relabel, O(V^2 E), or O(V^3) using FIFO vertex selection
Dinic's Algorithm, O(V^2 E)
Are the algorithms with better asymptotic running time just not practical for the problem sizes in the real world? Also, I see "Dynamic Trees" are involved in quite a few algorithms; are these ever used in practice?
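As a partial answer from the practical side, here is a hedged sketch using NetworkX (an assumption on my part: that you have networkx installed and are happy with a Python baseline). Its flow module implements several of the classic algorithms listed above, so you can swap flow_func to benchmark them against each other on your own instances.

```python
# Compare maximum-flow implementations shipped with NetworkX on a tiny graph.
import networkx as nx
from networkx.algorithms.flow import edmonds_karp, preflow_push, dinitz

G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)
G.add_edge("a", "b", capacity=1)

for algo in (edmonds_karp, preflow_push, dinitz):
    value, _ = nx.maximum_flow(G, "s", "t", flow_func=algo)
    print(algo.__name__, value)   # all three agree: the max flow value is 5
```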
