n! different permutations achievable in n^100 log n

I was reading this post on SO: p-versus-np
And saw this:
"The creation of all permutations for a given range with the length n is not bounded, as you have n! different permutations. This means that the problem could be solved in n^(100) log n, which will take a very long time, but this is still considered fast."
Can someone explain how n! is solvable in n^(100) log n?

I carefully read the statement, which comes from a longer explanation that I found by googling. I think the correct wording would be:
"This means that a problem could be solved in n^100 log n, which would take a very long time, but this is still considered fast. On the other hand, one of the first algorithms for TSP was O(n!) and another one was O(n^2 * 2^n). And compared to polynomial functions these things grow really, really fast."
Notice the word "a" instead of "the".

The correct approximation for n! is Stirling's formula.
That contradicts what that guy wrote. Honestly, I have no idea what he meant by it...
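For reference, Stirling's approximation (a standard fact, spelled out here for completeness) is:

n! ~ sqrt(2 pi n) * (n/e)^n

Taking logarithms gives log(n!) = n log(n) - n log(e) + O(log n) = Theta(n log n). In particular, n! eventually outgrows n^100 log n and every other fixed polynomial, so no O(n^100 log n) algorithm can enumerate all n! permutations.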

Related

What is the growth of this program? ChatGPT is wrong, right?

This is definitely a simple, possibly stupid, question, but for some reason it is bugging me. This is not a homework question, insofar as I am not solving a homework question for my own work. I am a mathematician who is helping a friend with some of his math homework for his CS degree. I am pretty certain I have the correct solution, but ChatGPT gives a different one, and he wants to follow ChatGPT's solution when I'm pretty sure it is wrong.
The pseudocode is simple:
l = 0
while n >= 1 do
    for j = 1, n do
        l = l + 2
    od
    n = n / 3
od
ChatGPT has said that this is O(n^2 log(n)), and that just doesn't make sense to me. Its explanation is also kind of nonsense. But my friend seems to be leaning toward ChatGPT, which I think is bullshit. I believe the correct bound is O(n log(n)), and I've supplied the following reasoning.
The operation:
while n >= 1 do
    n = n / 3
od
grows like O(log(n)). Adding a for loop, as I did, gives a total cost that looks like:
sum_{j=1}^{log(n)} O(n/3^j)
This entire sum of operations just becomes O(n log(n))...
I'm pretty confident ChatGPT is wrong in this scenario, but I'd appreciate input from someone more well versed in this kind of thing. I don't translate code into O notation at all in my own work. If I'm wrong, that's fine, but I'd appreciate a well-thought-out answer. For God's sake, this program has me checking my sanity!
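One way to sanity-check a bound like this is to count the inner-loop iterations directly. Here is a minimal Python sketch of the pseudocode above (the name count_ops is mine, added purely for illustration):

def count_ops(n):
    # Translate the pseudocode, counting executions of "l = l + 2".
    ops = 0
    while n >= 1:
        for _ in range(int(n)):  # the inner for loop runs about n times
            ops += 1
        n = n / 3  # real division, as in the pseudocode
    return ops

for n in [10**3, 10**4, 10**5]:
    print(n, count_ops(n))

Comparing the printed counts against n, n log(n), and n^2 log(n) makes it easy to see which bound is tight.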
Update
I realize I may have spoken too soon to say whether O(n * log n) is the right answer. It's certainly a lot closer to a right answer than O(n² * log n).
Original answer
You are right. ChatGPT is wrong. Your logic is sound. You've got an O(n) operation nested inside an O(log n) operation, which comes out to O(n * log n).
This should not come as a surprise: ChatGPT is designed to write prose, and has a consistent track record of making a very convincing case for "facts" and solutions that are simply wrong. Your friend should get out of the habit of relying on ChatGPT for their homework answers for this reason, even if they're ignoring the ethics of the matter.
This question was answered by @moreON in the comments, but I will present the answer properly, as I see it. Thank you @StriplingWarrior; you definitely quelled my fear that I was way off. I knew it was at worst O(n log(n)), but the correct answer is O(n), which you identified before me, after @moreON's comments.
The program f which takes n and gives l is given as:
l = 0
while n >= 1 do
    for k = 1, k <= n do
        l = l + 2
    od
    n = n / 3
od
The time it takes to run this program, if we treat each single operation as O(1), is captured exactly by the sum
sum_{i=0}^{log_3(n)} n/3^i
when we consider the growth in n. This comes about because the while loop makes about log_3(n) passes, and on pass i the inner for loop runs about n/3^i times, since n has been divided by 3 i times by that point. So the CPU time to evaluate f on input n grows as
sum_{i=0}^{log_3(n)} n/3^i = n * O(1) = O(n)
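To spell out why that sum is only a constant times n (a standard geometric-series bound, filled in here for completeness):

sum_{i=0}^{log_3(n)} n/3^i = n * sum_{i=0}^{log_3(n)} (1/3)^i < n * 1/(1 - 1/3) = (3/2) n

so the total work is bounded by a constant multiple of n.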
As a mathematician I should've seen this. But even if I had, I'd have doubted that it's that easy, or as simple as that: just a bound on a relatively simple sum. I thought that, at best, we could rough-bound it with O(n log n). But we can prove that the solution to our problem is:
f(n) = O(n)
I want to thank @moreON and @StriplingWarrior. You guys were right... I apologize for being a jackass.

Why is O(2ⁿ) less complex than O(1)?

https://www.freecodecamp.org/news/big-o-notation-why-it-matters-and-why-it-doesnt-1674cfa8a23c/
Exponentials have greater complexity than polynomials as long as the coefficients are positive multiples of n
"O(2ⁿ) is more complex than O(n⁹⁹), but O(2ⁿ) is actually less complex than O(1). We generally take 2 as base for exponentials and logarithms because things tends to be binary in Computer Science, but exponents can be changed by changing the coefficients. If not specified, the base for logarithms is assumed to be 2."
I thought O(1) is the simplest in complexity. Could anyone explain why O(2ⁿ) would be less complex than O(1)?
Errata. The author made an obvious mistake and you caught it. It's not the only mistake in the article. For example, I would expect O(n*log(n)) to be the more appropriate complexity for sorting algorithms than the one they claim (quoted below). Otherwise, you'd be able to sort a set without even seeing all of the data.
"As complexity is often related to divide and conquer algorithms, O(log(n)) is generally a good complexity you can reach for sorting algorithms."
It might be worthwhile to try to contact the author and give him a heads up so he can correct it and avoid confusing anyone else with misinformation.
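For completeness, here is the standard decision-tree argument (my addition, not from the article) for why comparison sorting needs Omega(n log(n)): a comparison sort must be able to distinguish between the n! possible orderings of its input, and each comparison has only two outcomes, so it needs at least

log_2(n!) = Theta(n log n)

comparisons, by Stirling's approximation. An O(log(n)) sort would not even have time to look at each element once.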

Compute size N that can be solved in certain amount of time

I am working on an exercise (note: not a homework question) where the number of steps a computer can execute is given, and one is asked to compute the largest input size N that can be handled within certain time intervals, for several different functions.
I have no problem doing this for functions such as f(n) = n, n^2, n^3 and the like.
But when it comes to f(n) = lg n, sqrt(n), n log n, 2^n, and n!, I run into problems.
It is clear to me that I have to set up an equation of the form func(n) = interval and then solve for n.
But how do I do this with the functions above?
Can somebody please give me an example, or name the inverse functions, so that I can look them up on Wikipedia or somewhere else?
Your question isn't so much about algorithms, or complexity, but about inversions of math formulas.
It's easy to solve for n in n^k = N in closed form. Unfortunately, for most other functions a closed form is either not known, or known not to exist. In particular, for n log(n), the solution involves the Lambert W function, which doesn't help you much.
In most cases, you will have to solve this kind of stuff numerically.
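Since closed forms are mostly unavailable, numeric inversion works fine in practice. Here is a minimal Python sketch (the function name max_input_size and the example budget are mine, chosen for illustration); it finds the largest n whose cost fits the budget by exponential search followed by bisection:

import math

def max_input_size(cost, budget):
    # Largest integer n with cost(n) <= budget; assumes cost is nondecreasing.
    hi = 1
    while cost(hi) <= budget:  # exponential search for an upper bracket
        hi *= 2
    lo = hi // 2
    while lo + 1 < hi:  # bisection inside (lo, hi)
        mid = (lo + hi) // 2
        if cost(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

budget = 1e9  # e.g. steps executable in the given time interval
print(max_input_size(lambda n: n * math.log2(n), budget))  # n log n
print(max_input_size(lambda n: math.sqrt(n), budget))      # sqrt(n)
print(max_input_size(lambda n: 2.0 ** n, budget))          # 2^n

The same function handles n! via math.factorial, since the search only ever evaluates the cost function.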

How much time (Big-O) will an algorithm take which can rule out one third of possible numbers from 1 to N in each step?

I am abstracting the problem (it has nothing to do with prime numbers).
Suppose I was able to design an algorithm which can rule out one third of the numbers from the possible answers {1, 2, ..., n} in the first step, and then successively rules out one third of the "remaining" numbers until all numbers are tested.
How much time (in terms of Big-O) will it take to determine if n is the solution?
I have thought a lot about it but can't figure out whether it will be O(n log_3(n)) or O(log_3(n)).
It depends on the algorithm, and on the value of N. You should be able to figure out and program an algorithm that takes O(sqrt(N)) rather easily, and it's not difficult to go down to O(sqrt(N) / log N). Anything better requires some rather deep mathematics, but there are algorithms that are a lot faster for large N.
Now when you say O(N log N), please don't guess these things. O(N log N) is ridiculous. The most naive algorithm, using nothing but the definition of a prime number, is O(N).
Theoretically, the best known bound is polynomial in log N, but the corresponding algorithm is not something you could figure out easily. See http://en.wikipedia.org/wiki/AKS_primality_test
There are more practical probabilistic algorithms though.
BTW, about 'ruling out one third' and so on: it does not matter whether it is log base 3 or log base 10. O(log N) roughly means a logarithm to any base, because all logarithms can be converted into each other by a constant multiplier. So the complexity of such an algorithm will be log N * complexity_of_reduction_step. But the problem is that a single step will hardly take constant time, and if it doesn't, this will not help in achieving O(log N).
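To make that last point concrete (my own arithmetic, under the stated assumption that each step eliminates one third of the remaining candidates): after k steps, about n * (2/3)^k candidates remain, so getting down to a single candidate takes k = log_{3/2}(n) = O(log n) steps. If each step really were O(1), the total would be O(log n). But if a step costs time proportional to the number of remaining candidates, the total is sum_{k=0}^{infinity} n * (2/3)^k = 3n = O(n).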

Finding the best BigO notation with two strong terms

I am asked to find the simplest exact answer and the best big-O expression for the expression:
sum_{n=j}^{k} n
I have computed what I think is the simplest exact answer:
-1/2(j-k-1)(j+k)
Now when I go to take the best possible big-O expression I am stuck.
From my understanding, big-O captures the worst-case running time of an algorithm by taking the term that dominates the rest. So, for example, I know:
n^2+n+1 = O(n^2)
Because in the long run, n^2 is the only term that matters for big n.
My confusion with the original formula in question:
-1/2(j-k-1)(j+k)
is as to what the strongest term is. To try to see it, I expand the product:
-1/2(j^2 + jk - jk - k^2 - j - k) = -1/2(j^2 - k^2 - j - k)
This still does not make it clear to me, since we now have j^2 - k^2. Is the answer I am looking for O(k^2), since k is the end point of my summation?
Any help is appreciated, thanks.
EDIT: It is unspecified which variable (j or k) is larger.
If you know k > j, then you have O(k^2). Intuitively, that's because as numbers get bigger, squares get farther apart.
It's a little unclear from your question which variable is the larger of the two, but I've assumed that it's k.
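A quick way to double-check the closed form from the question (a small Python sketch; the function names are mine):

def sum_j_to_k(j, k):
    # Direct summation of n from j to k inclusive.
    return sum(range(j, k + 1))

def closed_form(j, k):
    # The formula from the question: -1/2 (j - k - 1)(j + k).
    # The product is always even, so integer division is exact.
    return -(j - k - 1) * (j + k) // 2

for j, k in [(1, 10), (3, 7), (5, 5)]:
    print(j, k, sum_j_to_k(j, k), closed_form(j, k))

Both columns agree, and for fixed j the value grows like k^2 / 2, consistent with O(k^2).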
