What is the growth of this program? ChatGPT is wrong, right? - algorithm

This is definitely a stupid, simple question, but for some reason it is bugging me. It is not a homework question, insofar as I am not solving homework for my own work: I am a mathematician helping a friend with some of the math homework for his CS degree. I am pretty certain I have the correct solution, but ChatGPT gives a different one, and he wants to follow ChatGPT's answer when I'm pretty sure it is wrong.
The pseudocode is simple:
l = 0
while n >= 1 do
    for j = 1, n do
        l = l + 2
    od
    n = n / 3
od
ChatGPT has said that this is O(n^2 log(n)), and that just doesn't make sense to me. Its explanation is also kind of nonsense. But my friend seems to be leaning toward ChatGPT, which I think is bullshit. I believe the correct bound is O(n log(n)), and I've supplied the following reasoning.
The operation:
while n >= 1 do
    n = n / 3
od
grows like O(log(n)). Adding a for loop, as I did above, gives something that looks like:
Sum_{j=1}^{log(n)} O(n/3^j)
This entire sum of operations just becomes O(n log(n))...
I'm pretty confident ChatGPT is wrong in this scenario, but I'd appreciate input from someone more well versed in this kind of thing. I don't translate code to O notation at all in my own work. If I'm wrong, that's fine, but I'd appreciate a well-thought-out answer. For god's sake, this program has me checking my sanity!

Update
I realize I may have spoken too soon to say whether O(n * log n) is the right answer. It's certainly a lot closer to a right answer than O(n² * log n).
Original answer
You are right. ChatGPT is wrong. Your logic is sound. You've got an O(n) operation nested inside an O(log n) operation, which comes out to O(n * log n).
This should not come as a surprise: ChatGPT is designed to write prose, and has a consistent track record of making a very convincing case for "facts" and solutions that are simply wrong. Your friend should get out of the habit of relying on ChatGPT for their homework answers for this reason, even if they're ignoring the ethics of the matter.

This question was answered by #moreON, but I will present the answer properly, as I see it. Thank you #StriplingWarrior, you definitely quelled my fears that I was far off. I knew O(n log(n)) was at least a valid bound, but the correct answer is O(n), which you identified before me, after #moreON's comments.
The program f which takes n and gives l is given as:
l = 0
while n >= 1 do
    for k = 1, k <= n do
        l = l + 2
    od
    n = n / 3
od
The time it takes to run this program, if we treat each single operation as O(1), looks exactly like the sum:
sum_{i=0}^{log_3(n)} n/3^i
as a function of n. This comes about because the while loop runs about log_3(n) times, and on the i-th pass the inner for loop runs about n/3^i times, since n has been divided by 3 i times by that point. The total work is therefore a geometric series:
sum_{i=0}^{log_3(n)} n/3^i <= n * sum_{i=0}^{infinity} (1/3)^i = (3/2) n = O(n)
As a mathematician I should've seen this. But even if I had, I'd have doubted that it was that easy, or as simple as that: just a bound on a relatively simple geometric sum. I thought at best we could roughly bound it by O(n log n). But we can prove that the solution to our problem is:
f(n) = O(n)
I want to thank #moreON and #StriplingWarrior. You guys were right... I apologize for being a jackass.
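To sanity-check the bound empirically, here is a minimal Python sketch (my own addition, not from the original answers) that counts how many times the inner loop body runs and compares that count to n:

def count_iterations(n):
    # Count how many times "l = l + 2" runs for the pseudocode above.
    count = 0
    while n >= 1:
        for _ in range(int(n)):
            count += 1
        n = n / 3
    return count

for n in [10, 100, 1000, 10000, 100000]:
    c = count_iterations(n)
    print(n, c, c / n)  # the ratio c/n stays below 3/2, consistent with O(n)

The ratio stays bounded by a constant of about 3/2, which is exactly the geometric-series bound above.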

Related

How do I know that n^(0.1) = w[ (log n)^10 ] with regard to algorithm complexity

assume w = little omega
one algorithm takes n^(0.1) steps to complete a task
another algorithm takes (log n)^10 to complete the task
The question asked is whether n^(0.1) = w((log n)^10) is true. I got the answer wrong and said false. However, I'm not sure why I got it wrong if, graphically, (log n)^10 appears to dominate n^(0.1) even at n values in the trillions.
What I have tried:
I have tried taking the limit of f(x)/g(x) to see if it goes to infinity. However, after a round of L'Hopital's rule it seems like this approach would be a very involved process.
I have also tried simply graphically assessing whether n^(0.1) grows faster than (log n)^10, and perhaps this is the source of my confusion. My professor said that in the case of logs vs. polynomials, any polynomial always dominates the log. However, I have tried graphing these functions against each other, and even at n = 9999999, (log n)^10 seems to clearly dominate n^(0.1).
Am I missing something here? Will n^0.1 dominate (log n)^10 at some point?
Note: I should have clarified that this is all in relation to big O notation and algorithm complexity. Where n^(0.1) and (log n)^10 represent the steps that a given algorithm takes to perform a task. I've added some description to the question which will hopefully make things clearer.
Since you didn't specify the base of the log, let's make it a constant B.
Now the easiest way to prove what you want is to let n = B^(10m).
Then n^(0.1) / (log_B n)^10 = B^m / (10m)^10.
Now we're comparing a simple exponential to a simple polynomial, and we can prove, if necessary, that the exponential will dominate... though n will get quite large before it does.
For instance, if we find m such that 2^m = (10m)^10, we get m = 100 or so, so n = 2^1000.
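A quick numerical check (a sketch of my own, not part of the original answer; log2_of_f and log2_of_g are just helper names) locates where n^(0.1) actually overtakes (log_2 n)^10, working with logarithms of both quantities so the numbers stay manageable:

import math

def log2_of_f(m):
    # log2 of n^0.1, where n = 2^m
    return 0.1 * m

def log2_of_g(m):
    # log2 of (log2 n)^10, where n = 2^m
    return 10 * math.log2(m)

# Find the exponent m at which n^0.1 overtakes (log2 n)^10, with n = 2^m.
m = 2
while log2_of_f(m) <= log2_of_g(m):
    m += 1
print(m)  # roughly 1000, i.e. the crossover is near n = 2^1000

This agrees with the estimate above: the polynomial only wins once n is around 2^1000, far beyond anything you could plot.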

Can Someone Verify My Answers on Asymptotic Analysis?

This is for a data structures and algorithms course. I am confident in all of them except part d, and I am not sure how to approach part e. I know that part e is the sum of the harmonic series, and our professor told us it is bounded between ln(n) + 1/n and ln(n) + 1, since there is no closed-form expression for the partial sums of the harmonic series, but I am still not sure how to decide which has the faster or slower growth rate in order to classify them. If someone could review my answers and help me understand part e, I would appreciate it. Thank you.
The question: https://imgur.com/a/mzi0LL9
My answers: https://imgur.com/a/yxV6pim
Any function of the form is going to dominate a series like that.
We can factor out the constant to see that a bit more easily, and such a general harmonic series is bounded above by a log.
So obviously we can ignore the 200 in big-O. In lieu of a proof, since it seems one isn't required, you can think about the intuition behind it. As n grows, the summation keeps adding smaller and smaller terms but keeps growing, to the point where the running total is massive even though 1/n is practically zero.
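As a quick illustration (my own sketch, not from the original thread), here is a numerical check of the bound your professor gave, ln(n) + 1/n <= H_n <= ln(n) + 1, which is what makes the whole sum O(log n):

import math

def harmonic(n):
    # H_n = 1 + 1/2 + ... + 1/n, the partial sum of the harmonic series
    return sum(1.0 / i for i in range(1, n + 1))

for n in [10, 100, 1000, 10000]:
    h = harmonic(n)
    lower = math.log(n) + 1.0 / n
    upper = math.log(n) + 1.0
    print(n, lower <= h <= upper)  # prints True: H_n grows like ln(n)

The multiplicative constant (the 200 in part e) does not change this: the entire sum is still Θ(log n).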

Big Oh - O(n) vs O(n^2)

I've recently finished two tests for a data structures class, and I've gotten a question related to O(n) vs O(n^2) wrong twice. I was wondering if I could get help understanding the problem. The problem is:
Suppose that Algorithm A has runtime O(n^2) and Algorithm B has runtime O(n). What can we say about the runtime of these two algorithms when n=17?
a) We cannot say anything about the specific runtimes when n=17
b) Algorithm A will run much FASTER than Algorithm B
c) Algorithm A will run much SLOWER than Algorithm B
For both tests I answered C based on: https://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions. I knew B made no sense based on the link provided. Now I am starting to think that it's A. I'm guessing it's A because n is small. If that is the case, I am wondering when n is sufficiently large that C would be true.
There are actually two issues here.
The first is the one you mentioned. Orders of growth are asymptotic. They just say that there exists some n0 for which, for any n > n0, the function is bounded in some way. They say nothing about specific values of n, only "large enough" ones.
The second problem (which you did not mention) is that O is just an upper bound (as opposed to Θ), and so even for large enough n you can't compare the two. So if A = √n and B = n, then obviously B grows faster than A. However, A and B still fit the question, as √n = O(n^2) and n = O(n).
The answer is A.
The big-O order of a function f(x) is g(x) if f(x) <= K*g(x) for all x greater than some real number.
Both 3*n + 2 and n are O(n), since 4*n is greater than both functions for all n > 2. Even though both functions have the same big-O, we cannot say they take the same time for a particular value: for example, at n = 0 the value of the first function is 2 and the second one is 0.
So we cannot exactly relate the running times of two functions at some specific value.
The answer is a): You can't really say anything for any specific number just given the big O notation.
Counter-example for c: B has a runtime of 1000*n (= O(n)), A has a runtime of n^2.
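To make that counter-example concrete, here is a small sketch of my own in which 1000*n and n^2 are treated purely as hypothetical step counts:

# Hypothetical step counts taken from the counter-example above.
def steps_B(n):
    return 1000 * n  # O(n), but with a large constant factor

def steps_A(n):
    return n * n     # O(n^2)

n = 17
print(steps_A(n), steps_B(n))   # 289 17000
print(steps_A(n) < steps_B(n))  # True: the O(n^2) algorithm is faster at n = 17

# The O(n) algorithm only wins once n passes the crossover point:
print(min(n for n in range(1, 10**6) if steps_A(n) > steps_B(n)))  # 1001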
When doing algorithm analysis, specifically big-O, you should really only think about input sizes tending towards infinity. With such a small size (tens vs. thousands vs. millions), there is not a significant difference between the two. However, in general O(n) should run faster than O(n^2), even if the difference is less than a few milliseconds. I suspect the key word in that question is much.
My answer is based on my experience in competitive programming, which requires a basic understanding of big-O.
When you talk about which one is faster and which one is slower, a basic calculation settles it: O(n) is faster than O(n^2), and big-O is based on the worst-case scenario.
Now, when exactly does that matter? Well, in competitive programming we use a 10^8 rule of thumb. It means that if an algorithm's complexity is O(n), n is around 10^8, and the time limit is about 1 second, the algorithm can solve the problem (a computer can process roughly 10^8 operations per second).
But what if the algorithm's complexity is O(n^2)? Then, for n = 10^8, it will need around (10^8)^2 operations, which takes far more than 1 second.
So, within 1 second, the maximum bound for O(n^2) is around n = 10^4, while O(n) can handle up to n = 10^8. This is where we can clearly see the difference between the two complexities under a 1-second time limit on a computer.
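As a small worked sketch of that rule of thumb (the 10^8 operations-per-second figure is the rule of thumb itself, not a measured value, and the variable names are mine):

import math

OPS_PER_SECOND = 10**8  # rule-of-thumb budget for a 1-second time limit

# Largest n whose step count still fits in the budget.
max_n_linear = OPS_PER_SECOND                 # n steps   -> n up to 10^8
max_n_quadratic = math.isqrt(OPS_PER_SECOND)  # n^2 steps -> n up to 10^4

print(max_n_linear, max_n_quadratic)  # 100000000 10000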

Finding the best BigO notation with two strong terms

I am asked to find the simplest exact answer and the best big-O expression for the expression:
sum_{n=j}^{k} n, i.e. the sum of n for n = j to k.
I have computed what I think the simplest exact answer is as:
-1/2(j-k-1)(j+k)
Now when I go to take the best possible big-O expression I am stuck.
From my understanding, big-O is just finding the operation time of the worst case for an algorithm by taking the term that overpowers the rest. So, for example, I know:
n^2+n+1 = O(n^2)
Because in the long run, n^2 is the only term that matters for big n.
My confusion with the original formula in question:
-1/2(j-k-1)(j+k)
is what the strongest term is. To try to work it out, I expand the product to get:
-1/2(j^2-jk-j+jk-k^2-k)
which still does not make it clear to me, since we now have both j^2 and k^2. Is the answer I am looking for O(k^2), since k is the end point of my summation?
Any help is appreciated, thanks.
EDIT: It is unspecified as to which variable (j or k) is larger.
If you know k > j, then you have O(k^2). Intuitively, that's because as numbers get bigger, squares get farther apart.
It's a little unclear from your question which variable is the larger of the two, but I've assumed that it's k.
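A quick check of the closed form and the O(k^2) claim (a sketch of my own, assuming 1 <= j <= k), comparing the formula against a brute-force sum:

def closed_form(j, k):
    # The simplest exact answer from the question: -(1/2)(j - k - 1)(j + k)
    return -(j - k - 1) * (j + k) // 2

def brute_force(j, k):
    return sum(range(j, k + 1))

for j, k in [(1, 10), (5, 100), (17, 1000)]:
    assert closed_form(j, k) == brute_force(j, k)
    # The dominant term is k^2/2, so the sum is O(k^2) when k is the larger variable.
    print(j, k, closed_form(j, k), k * k // 2)

With k the larger variable, the k^2/2 term dominates, matching the answer above.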

n! different permutations achievable in n^100 log n

I was reading this post on SO : p-versus-np
And saw this:
"The creation of all permutations for a given range with the length n is not bounded, as you have n! different permutations. This means that the problem could be solved in n^(100) log n, which will take a very long time, but this is still considered fast."
Can someone explain how n! is solvable in n^(100) log n
I carefully read the statement that comes from a longer explanation that I googled out. I think that the correct wording would be:
"This means that a problem could be solved in n 100 log n, which would take a very long time, but this is still considered fast. On the other hand, one of the first algorithms for TSP was O(n!) and another one was O(n2 2n). And compared to polynomial functions these things grow really, really fast."
Notice the word "a" instead of "the"
The correct approximation is Stirling's formula.
That contradicts what that guy wrote. Honestly, I have no idea what he meant by it...
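To see concretely why n! cannot be bounded by n^100 log n, here is a sketch of my own that compares logarithms of the two quantities (by Stirling's formula, ln(n!) is roughly n ln n - n, which eventually dwarfs 100 ln n + ln ln n):

import math

def ln_factorial(n):
    # ln(n!) via the log-gamma function; Stirling: ln(n!) ~ n*ln(n) - n
    return math.lgamma(n + 1)

def ln_bound(n):
    # ln of n^100 * log(n)
    return 100 * math.log(n) + math.log(math.log(n))

for n in [10, 100, 1000, 10000]:
    print(n, ln_factorial(n) > ln_bound(n))
# prints False, False, True, True: n! overtakes n^100 log n somewhere between
# n = 100 and n = 1000, and from then on grows incomparably faster

So generating all n! permutations cannot be done in n^100 log n steps for large n; the corrected wording ("a problem", not "the problem") is what resolves the apparent contradiction.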
