I'm trying to see if I'm correct in my work for finding the big-O notation of some formulas. Each of these formulas is the number of operations in some algorithm. These are the formulas:
Formulas
a.) n^2 + 5n
b.) 3n^2 + 5n
c.) (n + 7)(n - 2)
d.) 100n + 5
e.) 5n + 3n^2
f.) The number of digits in 2n
g.) The number of times that n can be divided by 10 before dropping below 1.0
My answers:
a.) O(n^2)
b.) O(n^2)
c.) O(n^2)
d.) O(n)
e.) O(n^2)
f.) O(n)
g.) O(n)
Am I correct in my analysis?
Let's go through these one at a time.
a.) n^2 + 5n. Your answer: O(n^2)
Yep! You're ignoring lower-order terms correctly.
b.) 3n^2 + 5n. Your answer: O(n^2).
Yep! Big-O eats constant factors for lunch.
c.) (n + 7)(n - 2). Your answer: O(n^2).
Yep! You could expand this out into n^2 + 5n - 14 and from there drop the low-order terms to get O(n^2), or you could realize that n + 7 = O(n) and n - 2 = O(n) to see that this is the product of two terms that are each O(n).
d.) 100n + 5. Your answer: O(n).
Yep! Again, dropping constants and lower-order terms.
e.) 5n + 3n^2. Your answer: O(n^2).
Yep! Order is irrelevant; 5n is still a low-order term.
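If you'd like to watch the dropped terms fade numerically, here's a quick sketch of my own (not part of the original answer, just an illustration) that prints the ratio of each formula to its leading term; the ratios settle toward constants, which is exactly why the lower-order parts don't matter:

    #include <stdio.h>

    int main(void) {
        /* As n grows, each formula's ratio to its leading term settles to a
           constant, which is why the lower-order parts get dropped. */
        for (double n = 10; n <= 1e6; n *= 100) {
            printf("n = %8.0f: (n^2+5n)/n^2 = %.5f, (3n^2+5n)/n^2 = %.5f, "
                   "(100n+5)/n = %.5f\n",
                   n, (n * n + 5 * n) / (n * n),
                   (3 * n * n + 5 * n) / (n * n),
                   (100 * n + 5) / n);
        }
        return 0;
    }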
f.) The number of digits in 2n. Your answer: O(n).
This one is technically correct but is not a good bound. Remember that big-O notation gives an upper bound, and you are correct that the number 2n has O(n) digits, but only in the sense that the number of digits is asymptotically at most n. To see why this bound isn't very good, let's look at the numbers 10, 100, 1000, 10000, and 100000. These numbers have 2, 3, 4, 5, and 6 digits, respectively. In other words, growing by a factor of ten only grows the number of digits by one. If the O(n) bound you had were tight, then you'd expect the number of digits to grow by a factor of ten every time you made the number ten times bigger, which isn't accurate.
As a hint for this one: if a number has d digits, then it's between 10^(d-1) and 10^d - 1. That means the numeric value of a d-digit number is exponential as a function of d. So, if you start with a number of digits, the numeric value is exponentially larger. Try running this backwards. If you have a numeric value that is exponential in its number of digits, what does that mean about the number of digits as a function of the numeric value?
g.) The number of times that n can be divided by 10 before dropping below 1.0. Your answer: O(n)
This one is also technically correct but not a very good bound. Let's take the number 100,000, for example. Dividing it by 10 six times brings it below 1.0, but a bound of O(n) says that the answer grows linearly as a function of n, so doubling n should double the number of times you can divide by ten... but is that actually the case?
As a hint, the number of times you can divide a number by ten before it drops below 1.0 is closely related to the number of digits in that number. If you can figure out this problem, you'll figure out part (f), and vice-versa.
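As a concrete illustration of both hints (again my own sketch, not part of the original answer), the loop below counts the divisions by ten, which is exactly the digit count, and shows it growing by one, not by a factor of ten, each time n gets ten times bigger:

    #include <stdio.h>

    int main(void) {
        /* Count how many times n can be divided by 10 before dropping below 1.
           That count equals the number of decimal digits of n, and it grows
           logarithmically in n, not linearly. */
        for (long long n = 10; n <= 1000000; n *= 10) {
            int count = 0;
            for (long long v = n; v >= 1; v /= 10)
                count++;
            printf("n = %8lld -> %d divisions by 10 (= digits of n)\n", n, count);
        }
        return 0;
    }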
Good luck!
Related
This is a question from Algorithms, 4th Edition about MergeSort. We proved earlier that the number of compares C(N) = N * log(N) when N is a power of 2. I answered this question by finding the derivative of N * log(N) and showing that this derivative is increasing for N > 0. Is this the correct approach, or is there an alternate way to solve this question?
No, that approach doesn't work. If you've only proved a rule for powers of two, you have no basis to assume that the same rule applies for all the numbers in between -- especially not the numbers between integers, which N cannot even assume. (I'm guessing that you used a continuous derivative, which of course only applies to continuous functions)
If you want to prove that the maximum or minimum number of comparisons monotonically increases for all integers > 0, you can use induction.
First, demonstrate that it is true for N from 2^0 to 2^1, and then show that if it's true for all N = 2^x through 2^(x+1), then it's true for 2^(x+1) through 2^(x+2).
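As a numeric sanity check (not a substitute for the induction proof), the sketch below assumes the usual worst-case merge recurrence, C(1) = 0 and C(N) = C(ceil(N/2)) + C(floor(N/2)) + N - 1, and verifies that C is non-decreasing over a range of N:

    #include <stdio.h>

    /* Assumed worst-case compare recurrence for mergesort:
       C(1) = 0, C(N) = C(ceil(N/2)) + C(floor(N/2)) + N - 1. */
    static long long compares(long long n, long long *memo) {
        if (n <= 1) return 0;
        if (memo[n] != 0) return memo[n];
        memo[n] = compares((n + 1) / 2, memo) + compares(n / 2, memo) + n - 1;
        return memo[n];
    }

    int main(void) {
        enum { MAX = 1 << 14 };
        static long long memo[MAX + 1];
        long long prev = 0;
        for (long long n = 1; n <= MAX; n++) {
            long long c = compares(n, memo);
            if (c < prev) {
                printf("not monotone at N = %lld\n", n);
                return 1;
            }
            prev = c;
        }
        printf("C(N) is non-decreasing for all N up to %d\n", MAX);
        return 0;
    }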
Say we have an algorithm that needs to list all possibilities of choosing k elements from n elements (k <= n). Is the time complexity of this particular algorithm exponential, and why?
No.
There are n choose k = n!/(k!(n-k)!) possibilities [1].
Note that n choose k <= n^k / k! [2].
Assuming you keep k constant, as n grows, the number of possibilities grows polynomially.
For this example, ignore the 1/k! factor because it is constant. If k = 2 and you increase n from 2 to 3, then you have a 2^2 to 3^2 change. An exponential change would be from 2^2 to 2^3. This is not the same.
Keeping k constant and changing n results in a big O of O(n^k) (the 1/(k!) term is constant and you ignore it).
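Here's a small demonstration of that polynomial growth (my own sketch; the choose helper below is just the standard multiplicative formula, not anything from the question). For fixed k = 3, doubling n multiplies the count by roughly 2^3:

    #include <stdio.h>

    /* n choose k via the multiplicative formula; intermediate values stay
       integral because each prefix product is itself a binomial coefficient. */
    static unsigned long long choose(unsigned long long n, unsigned long long k) {
        unsigned long long result = 1;
        for (unsigned long long i = 1; i <= k; i++)
            result = result * (n - k + i) / i;
        return result;
    }

    int main(void) {
        /* For fixed k, doubling n multiplies the count by roughly 2^k:
           polynomial growth, not exponential. */
        for (unsigned long long n = 8; n <= 128; n *= 2)
            printf("C(%3llu, 3) = %8llu\n", n, choose(n, 3));
        return 0;
    }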
Thinking carefully about the size of the input instance is required here, since the input instance contains numbers; a basic familiarity with weak NP-hardness can also be helpful.
Assume that we fix k=1 and encode n in binary. Since the algorithm must visit n choose 1 = n numbers, it takes at least n steps. Since the magnitude of the number n may be exponential in the size of the input (the number of bits used to encode n), the algorithm in the worst case consumes exponential time.
You can get a feel for this exponential-time behavior by writing a simple C program that prints all the numbers from 1 to n with n = 2^64 and seeing how far you get in a minute. While the input is only 64 bits long, it would take about 600 years to print all the numbers, assuming that your device can print a billion numbers per second.
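A minimal version of that program might look like this (a sketch; the exact output format doesn't matter):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* The input n is only 64 bits, yet the loop runs 2^64 - 1 times:
           exponential in the bit length of the input. */
        uint64_t n = UINT64_MAX;  /* 2^64 - 1 */
        for (uint64_t i = 1; i <= n; i++) {
            printf("%llu\n", (unsigned long long)i);
            if (i == n) break;  /* avoid wraparound to 0 */
        }
        return 0;
    }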
An algorithm that finds all possibilities of choosing k elements from n unique elements (k <= n) does NOT have an exponential time complexity O(k^n); it instead has a factorial time complexity O(n!). The relevant formula is:
p = n!/(k!(n-k)!)
I read in a book that the expression O(2^n + n^100) reduces to O(2^n) when we drop the insignificant parts. I am confused because, as per my understanding, if the value of n is 3 then the n^100 part has a higher count of executions. What am I missing?
Big O notation is asymptotic in nature; that means we consider the expression as n tends to infinity.
You are right that for n = 3, n^100 is greater than 2^n, but once n > 1000, 2^n is always greater than n^100, so we can disregard n^100 in O(2^n + n^100) for n much greater than 1000.
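You can locate the crossover yourself without computing the astronomically large values directly, by comparing logarithms instead: 2^n > n^100 exactly when n*ln(2) > 100*ln(n). A quick sketch of my own:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* The raw values get astronomically large, so compare logarithms:
           2^n > n^100 iff n*ln(2) > 100*ln(n). */
        for (int n = 2; n <= 2000; n++) {
            if (n * log(2.0) > 100.0 * log((double)n)) {
                printf("2^n first exceeds n^100 at n = %d\n", n);  /* 997 */
                break;
            }
        }
        return 0;
    }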
For a formal mathematical description of Big O notation, the Wikipedia article does a good job.
For a less mathematical description, this answer also does a good job:
What is a plain English explanation of "Big O" notation?
Big O notation is used to describe asymptotic complexity. The word asymptotic plays a significant role: it basically means that n is not going to be 3 or some other small integer. You should think of n as being infinitely large.
Even though n^100 grows faster in the beginning, there will be a point where 2^n will outgrow n^100.
You are missing the fact that big O describes asymptotic complexity. Speaking more strictly, you could calculate lim(2^n / n^100) as n -> infinity and you would see it equals infinity, which means that asymptotically 2^n grows faster than n^100.
When complexity is measured in terms of n, you should consider all possible values of n, not just one example. For sufficiently large n, 2^n dominates n^100; this is why n^100 is insignificant.
I am currently learning about big O notation, but there is a concept that's confusing me. For 8N^2 + 4N + 3 the complexity class is N^2, because that is the fastest-growing term, and for 5N the complexity class is N.
Then is it correct to say that the complexity class of NLogN is N, since N grows faster than LogN?
The problem I'm trying to solve: configuration A consists of a fast algorithm that takes 5NLogN operations to sort a list, running on a computer that performs 10^6 operations per second, while configuration B consists of a slow algorithm that takes N^2 operations to sort a list, running on a computer that performs 10^9 operations per second. For smaller arrays configuration A is faster, but for larger arrays configuration B is better. For what size of array does this transition occur?
What I thought was that if I equated the expressions for the time each configuration takes, I could solve for the N of the transition point. That yielded N^2/10^9 = 5NLogN/10^6, which simplifies to N/5000 = LogN, which is not solvable symbolically.
Thank you
In mathematics, the definition of f = O(g), for two real-valued functions defined on the reals, is that f(n)/g(n) is bounded as n approaches infinity. In other words, there exists a constant A such that, for all sufficiently large n, f(n)/g(n) < A.
In your first example, (8n^2 + 4n + 3)/n^2 = 8 + 4/n + 3/n^2, which is bounded as n approaches infinity (by 15, for example), so 8n^2 + 4n + 3 is O(n^2). On the other hand, nlog(n)/n = log(n), which approaches infinity as n approaches infinity, so nlog(n) is not O(n). It is however O(n^2), because nlog(n)/n^2 = log(n)/n, which is bounded (it approaches zero near infinity).
As to your actual problem, remember that if you can't solve an equation symbolically, you can always solve it numerically. The existence of a solution is clear.
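For instance, the fixed-point iteration below (my own sketch; the base of the logarithm was never specified in the question, so it tries the three usual candidates) solves N/5000 = log(N) numerically:

    #include <stdio.h>
    #include <math.h>

    /* Solve N/5000 = log_b(N) by fixed-point iteration N <- 5000 * log_b(N).
       The base of the logarithm is an assumption, so try three candidates. */
    static double solve(double base) {
        double n = 10.0;
        for (int i = 0; i < 100; i++)
            n = 5000.0 * log(n) / log(base);
        return n;
    }

    int main(void) {
        printf("base 2:  N ~ %.0f\n", solve(2.0));       /* ~81,600 */
        printf("base e:  N ~ %.0f\n", solve(exp(1.0)));  /* ~54,500 */
        printf("base 10: N ~ %.0f\n", solve(10.0));      /* ~21,700 */
        return 0;
    }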
Let's suppose that the base of your logarithm is b, so we are to compare
5N * log(b, N)
with
N^2
5N * log(b, N) = log(b, N^(5N))
N^2 = N^2 * log(b, b) = log(b, b^(N^2))
So we compare
N ^ (5N) with b^(N^2)
Let's compare them by analyzing the relative value of N^(5N) / b^(N^2) compared to 1. You will observe that after a certain limit it is smaller than 1.
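To see this numerically without overflowing anything, compare the logarithms of the two sides (a sketch of mine, assuming b = 2):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Compare N^(5N) with b^(N^2) in log space, taking b = 2:
           ln(N^(5N)) = 5N*ln(N) and ln(b^(N^2)) = N^2*ln(b).
           Once the ratio drops below 1, N^(5N) < b^(N^2). */
        double b = 2.0;
        for (double n = 2; n <= 64; n *= 2) {
            double ratio = 5.0 * n * log(n) / (n * n * log(b));
            printf("N = %4.0f: ratio of logs = %.3f%s\n",
                   n, ratio, ratio < 1.0 ? "  -> N^(5N) < b^(N^2)" : "");
        }
        return 0;
    }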
Q: Is it correct to say that the complexity class of NLogN is N?
A: No, here is why we can ignore smaller terms:
Consider N^2 + 1000000 N
For small values of N the second term dominates, but as N grows it stops mattering. Consider the ratio 1000000N / N^2, which shows the relative size of the two terms; it reduces to 1000000/N, which approaches zero as N approaches infinity. Therefore the second term has less and less importance as N grows, literally approaching zero.
It is not just "smaller," it is irrelevant for sufficiently large N.
That is not true for multiplicands. n log n is always significantly bigger than n, by a margin that continues to increase.
Then is it correct to say that the complexity class of NLogN is N, since N grows faster than LogN?
Nope, because N and log(N) are multiplied and log(N) isn't constant.
N/5000 = LogN
Roughly 55,000
Then is it correct to say that the complexity class of NLogN is N, since N grows faster than LogN?
No. When you omit, you should omit a TERM, and NLgN is, as a whole, a single term. By your suggestion we could take N*N*N = (N^2)*N and, since N^2 has the bigger growth rate, omit N. That is completely WRONG: the order is N^3, not N^2. NLgN works in the same manner; you only omit a term when it is added or subtracted.
For example, NLgN + N = O(NLgN) because NLgN has faster growth than N.
The problem I'm trying to solve: configuration A consists of a fast algorithm that takes 5NLogN operations to sort a list, running on a computer that performs 10^6 operations per second, while configuration B consists of a slow algorithm that takes N^2 operations to sort a list, running on a computer that performs 10^9 operations per second. For smaller arrays configuration A is faster, but for larger arrays configuration B is better. For what size of array does this transition occur?
This CANNOT be true. It is the absolute OPPOSITE. For small N values the faster computer with N^2 is better. For very large N the slower computer with NLgN is better.
Where is the transition point? Well, the second computer is 1000 times faster than the first one, so they will be equal in speed when N^2 = 1000NLgN, which solves to N ≈ 14,500. For N < 14,500 the N^2 configuration goes faster (since its computer is 1000 times faster), but for N > 14,500 the slower computer is much faster. Now imagine N = 1,000,000: the faster computer needs 50 times longer than the slower one, because N^2 = 50,000 NLgN while its machine is only 1000 times faster.
Note: the calculations were made using big O, where constant factors are omitted, and the logarithm used is base 2. In algorithm complexity analysis we usually write LgN rather than LogN, where LgN is log of N to the base 2 and LogN is log of N to the base 10.
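A quick tabulation (my own sketch, using base-2 logs and, as in the calculation above, ignoring the constant factor 5) makes the crossover and the N = 1,000,000 gap visible:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Seconds taken by each configuration (base-2 log, constant
           factor 5 omitted as in the calculation above). */
        for (double n = 1000; n <= 4.0e6; n *= 2) {
            double nlgn = n * log2(n) / 1e6;  /* NLgN algorithm, 10^6 ops/s */
            double nsq  = n * n / 1e9;        /* N^2 algorithm, 10^9 ops/s  */
            printf("N = %9.0f: NLgN machine %10.3f s, N^2 machine %10.3f s\n",
                   n, nlgn, nsq);
        }
        return 0;
    }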
However, referring to CLRS (a good book, I recommend reading it), Big O is defined as: f(N) = O(g(N)) if there exist positive constants c and N0 such that 0 <= f(N) <= c*g(N) for all N >= N0. (The book's accompanying graph shows c*g(N) bounding f(N) from above once N passes N0.)
It is all about N > N0. All the rules of Big O notation are valid FOR BIG VALUES OF N. For small N they are NOT necessarily correct; for N = 5, say, Big O need not give a close approximation of the running time.
I hope this gives a good answer to the question.
Reference: Chapter 3, Section 1, [CLRS] Introduction to Algorithms, 3rd Edition.
What is the smallest value of n such that an algorithm whose running time is 100n^2 runs faster than an algorithm whose running time is 2^n on the same machine?
The Scope
Although I am interested in the answer, I am more interested in how to find the answer step by step (so that I can repeat the process to compare any two given algorithms if at all possible).
From the MIT Press Algorithms book
You want the values of n where 100 × n^2 is less than 2 × n.
That is the solution of 100 × n^2 - 2 × n < 0, which happens to be 0 < n < 0.02.
(plot of 100 × n^2 versus 2 × n omitted)
EDIT:
The original question talked about 2 × n, not 2^n (see comments).
For 2^n, head to https://math.stackexchange.com/questions/182156/multiplying-exponents-solving-for-n
Answer is 15
The first thing you have to know, is what running time means. If we're talking about algorithms theoretically, the running time of an algorithm is the number of steps (or the amount of time) it takes to finish depending on the size of the input (where the size of the input is for example the number of bits, but also other measures are sometimes considered). In this sense, the algorithm which requires the least number of steps is the fastest.
So in your two formulas, n is the size of the input, and 100 * n^2 and 2^n are the number of steps the two algorithms run if given an input of size n.
At first sight, the 2^n algorithm looks much faster than the 100 * n^2 algorithm. For example, for n = 4, 100 * 4^2 = 1600 and 2^4 = 16.
However, 2^n is an exponential function, whereas 100 * n^2 is a polynomial function. That means that when n is large enough, it will be the case that 2^n > 100 * n^2. So you have to solve the inequality 100 * n^2 < 2^n. This will already hold for a fairly small n, so you can just start evaluating the functions, starting at n = 5, and you will reach the answer in a few minutes.
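A brute-force version of that evaluation (my sketch) confirms the crossover directly:

    #include <stdio.h>

    int main(void) {
        /* Evaluate both running times until 100*n^2 first drops below 2^n. */
        unsigned long long pow2 = 2;  /* 2^n, starting at n = 1 */
        for (unsigned long long n = 1; n <= 60; n++, pow2 *= 2) {
            if (100 * n * n < pow2) {
                printf("smallest n with 100*n^2 < 2^n: %llu\n", n);  /* 15 */
                return 0;
            }
        }
        return 0;
    }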