I am reading a book "Beginning Algorithms" by Simon Harris and James Ross.
In the early pages, there is a section on Understanding Big-O Notation. I have read and re-read this section maybe a dozen times, and I am still not able to wrap my head around a couple of things. I'd appreciate any help clearing up my confusion.
The author / authors state "The precise number of operations is not actually that important. The complexity of an algorithm is usually defined in terms of the order of magnitude of the number of operations required to perform a function, denoted by a capital O for order of followed by an expression representing some growth relative to the size of the problem denoted by the letter N."
This really made my head hurt, and unfortunately, everything else following this paragraph made no sense to me because this paragraph is supposed to lay the foundation for the next reading.
The book does not define "order of magnitude." I googled this, and the results just told me that orders of magnitude are defined in powers of 10. But what does that even mean? Do you take the number of operations and express it as a power of 10, and that equals the complexity? Also, what is considered the "size of the problem"? Is the size of the problem the number of operations? Or is the size of the problem the "order of magnitude of the number of operations required to perform a function"?
Any practical examples and a proper explanation of this would really help.
Thanks!
Keep it simple!
Just think of Big-O as a way to express the performance of an algorithm. That performance will depend on the number of elements the algorithm is handling, n.
An example: when you have to sum a list, you need one statement for the first addition, one statement for the second addition, and so on... So the performance is linear in the number of elements: O(n).
Now imagine a very smart sort algorithm that, with each element it handles, cuts the remaining work in half for the next one. That will be logarithmic in the number of elements: O(log(n)).
Or a complex formula with parameters, where each extra parameter multiplies the execution time. That will be exponential, e.g. O(10^n).
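If it helps, here is a rough sketch in Python (my own toy functions, not from the book) that makes the linear and logarithmic cases concrete:

def sum_all(items):
    # O(n): one addition per element, so the work grows linearly
    # with the number of elements.
    total = 0
    for value in items:
        total += value
    return total

def count_halvings(n):
    # O(log n): each pass throws away half of what is left,
    # the way binary search narrows its window.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(sum_all([1, 2, 3, 4]))   # 4 additions for 4 elements
print(count_halvings(1024))    # only 10 halvings for 1024 "elements"

Doubling the input doubles the work for the first function but adds only one step to the second.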
I am re-posting my question because I accidentally marked another thread (with a similar topic) as having answered my question, which was not the case. I am sorry for any inconvenience.
I am trying to understand how the input size, problem size and the asymptotic behavior of an arbitrary algorithm given in pseudocode format differ from each other. While I fully understand the input and the asymptotic behavior, I have problems understanding the problem size. To me it looks as if problem size = space complexity for a given problem. But I am not sure. I'd like to illustrate my confusion with the following example:
We have the following pseudo code:
ALGONE(x, y)
    if x = 0 or x = y then
        return 1
    end
    return ALGONE(x - 1, y - 1) + ALGONE(x, y - 1)
So let's say we give two inputs, $x$ and $y$, and $n$ represents the number of digits.
Since addition is our main operation and it is an elementary operation, and adding two numbers of n digits takes n such operations, the asymptotic behavior is of the form O(n).
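To convince myself of the n-operations claim I wrote this quick sketch (my own code, not from any textbook): grade-school addition touches each digit position exactly once.

def add_digit_strings(a, b):
    # assumes a and b are digit strings with the same number of digits n
    result, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry
        result.append(str(s % 10))
        carry = s // 10
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(add_digit_strings("123", "989"))   # 1112, after 3 digit-by-digit additions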
But what about the problem size in this case? I don't understand what I am supposed to say. The term problem size is so vague. It depends on the algorithm, but even if one is able to understand the algorithm, what do you give as an answer?
I'd assume that in this particular case the problem size might be the number of bits we need to represent the input. But this is a guess of mine, grounded in nothing.
I have 2 blocks of code. One with a single while loop, and the second with a for loop inside the while loop. My professor tells me that Option 1 has an algorithmic complexity of O(n) and Option 2 has an algorithmic complexity of O(n^2), but can't explain why, other than pointing to the nested for loops. I am confused because both perform the exact same number of calculations for any given size N, which doesn't seem consistent with them having different complexities.
I'd like to know:
a) if my professor is correct, how can they perform the same calculations but have different big-Os?
b) if my professor is incorrect and they are the same complexity, is it O(n) or O(n^2)? Why?
I've used inline comments denoted by '#' to note the computations. The number of packages to deliver should be N. self.trucks is a list. self.workDayCompleted is a boolean determined by whether all packages have been delivered.
Option 1:
# initializes index for "fake" for loop
truck_index = 0
while not self.workDayCompleted:
    # checks if truck index has reached the end of self.trucks
    if truck_index != len(self.trucks):
        # does X amount of calculations required for delivery of this truck's packages
        while not self.trucks[truck_index].isEmpty():
            self.trucks[truck_index].travel()
            self.trucks[truck_index].deliverPackage()
            if hub.packagesExist():
                self.trucks[truck_index].travelToHub()
                self.trucks[truck_index].loadPackages()
        # increments index
        truck_index += 1
    else:
        # resets index to 0 for the next pass through the truck list
        truck_index = 0
        # does X amount of calculations required for the while loop condition
        self.workDayCompleted = isWorkDayCompleted()
Option 2:
while not self.workDayCompleted:
    # initializes index (i), checks each iteration whether it has reached
    # the end of self.trucks, and increments it
    for i in range(len(self.trucks)):
        # does X amount of calculations required for delivery of this truck's packages
        while not self.trucks[i].isEmpty():
            self.trucks[i].travel()
            self.trucks[i].deliverPackage()
            if hub.packagesExist():
                self.trucks[i].travelToHub()
                self.trucks[i].loadPackages()
    # does X amount of calculations required for the while loop condition
    self.workDayCompleted = isWorkDayCompleted()
Any help is greatly appreciated, thank you!
It certainly seems like these two pieces of code are effectively implementing the same algorithm (i.e. deliver a package with each truck, then check to see if the work day is completed, repeat until the work day is completed). From this perspective you're right to be skeptical.
The question becomes: are they O(n) or O(n^2)? As you've described it, this is impossible to determine because we don't know what the conditions are for the work day being completed. Is it related to the amount of work that has been done by the trucks? Without that information we have no ability to reason about when the outer loop exits. For all we know the condition is that each truck must deliver 2^n packages and the complexity is actually O(n * 2^n).
So if your professor is right, my only guess is that there's a difference between the implementations of isWorkDayCompleted() between the two options. Barring something like that, though, the two options should have the same complexity.
Regardless, when it comes to problems like this it is always important to make sure that you're both talking about the same things:
What n means (presumably the number of trucks)
What you're counting (presumably the number of deliveries and maybe also the checks for the work day being done)
What the end state is (this is the red flag for me -- the work day being completed needs to be better defined)
Subsequent edits lead me to believe both of these options are O(n), since they ultimately perform one or two "travel" operations per package, depending on the number of trucks and their capacity. Given this, I think the answer to your core question (do those different control structures result in different complexity analysis) is no, they don't.
It also seems unlikely that the internals are affecting the code complexity in some important way, so my advice would be to get back together with your professor and see if they can expand on their thoughts. It very well might be that this was an oversight on their part, or that they were trying to make a more subtle point about how some of the components you're using are implemented.
If you get their explanation and there is something more complex going on that you still have trouble understanding, that should probably be a separate question (perhaps linked to this one).
a) if my professor is correct, how can they perform the same calculations but have different big-Os?
Two algorithms that do the same number of "basic operations" have the same time complexity, regardless how the code is structured.
b) if my professor is incorrect and they are the same complexity, is it O(n) or O(n^2)? Why?
First you have to define: what is "n"? Is n the number of trucks? Next, is the number of "basic operations" per truck the same, or does it vary in some way?
For example: If the number of operations per truck is constant C, the total number of operations is C*n. That's in the complexity class O(n).
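To illustrate (a toy sketch of mine, not the truck code): the two functions below arrange their loops differently, but both perform exactly C*n basic operations, so both are O(n) when C is a constant.

def count_ops_single_loop(n, C):
    # one flat loop performing C*n basic operations
    ops = 0
    for _ in range(C * n):
        ops += 1
    return ops

def count_ops_nested_loops(n, C):
    # nested loops performing the same C*n basic operations
    ops = 0
    for _ in range(n):
        for _ in range(C):
            ops += 1
    return ops

print(count_ops_single_loop(10, 5), count_ops_nested_loops(10, 5))  # 50 50

The nesting only becomes O(n^2) if the inner loop itself runs on the order of n times, i.e. if the work per truck grows with the number of trucks.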
This is a bit of a "soft question", so if this is not the appropriate place to post, please let me know.
Essentially I'm wondering how to talk about algorithms which are "equivalent" in some sense but "different" in others.
Here is a toy example. Suppose we are given a list of numbers, list, of length n. Two simple ways to add up the numbers in the list are given below. Obviously these methods are exactly the same in exact arithmetic, but in floating-point arithmetic they might give different results.
add_list_1(list, n):
    sum = 0
    for i = 1, 2, ..., n:
        sum += list[i]
    return sum

add_list_2(list, n):
    sum = 0
    for i = n, ..., 2, 1:
        sum += list[i]
    return sum
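For what it's worth, here is a quick runnable illustration of the floating-point point (the values are mine, chosen to force rounding in IEEE 754 double precision):

data = [1.0, 1e16, -1e16]

forward = 0.0
for x in data:             # the add_list_1 order
    forward += x

backward = 0.0
for x in reversed(data):   # the add_list_2 order
    backward += x

print(forward, backward)   # typically 0.0 vs. 1.0 with IEEE 754 doubles

In the forward order the 1.0 is absorbed by 1e16 before the large values cancel; in the reverse order they cancel first, so the 1.0 survives.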
This is a very common thing to happen with numerical algorithms, with Gram-Schmidt vs. modified Gram-Schmidt being perhaps the best-known example.
The wikipedia page for algorithms mentions "high level description", "implementation description", and "formal description".
Obviously, the implementation and formal descriptions vary, but a high level description such as "add up the list" is the same for both.
Are these different algorithms, different implementations of the same algorithm, or something else entirely? How would you describe algorithms where the high-level description is the same but the implementation is different when talking about them?
The following definition can be found on the Info for the algorithm tag.
An algorithm is a set of ordered instructions based on a formal language with the following conditions:
Finite. The number of instructions must be finite.
Executable. All instructions must be executable in some language-dependent way, in a finite amount of time.
Considering especially
set of ordered instructions based on a formal language
What this tells us is that the order of the instructions matter. While the outcome of two different algorithms might be the same, it does not imply that the algorithms are the same.
Your example of Gram-Schmidt vs. Modified Gram-Schmidt is an interesting one. Looking at the structure of each algorithm as defined here, these are indeed different algorithms, even on a high level description. The steps are in different orders.
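As a concrete, simplified sketch of that difference, here are the two orthogonalisation loops side by side. This is my own minimal Python, ignoring the usual numerical safeguards, just to show the reordering of steps:

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def scale(u, c):
    return [c * a for a in u]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def normalize(u):
    return scale(u, 1.0 / dot(u, u) ** 0.5)

def classical_gram_schmidt(vectors):
    qs = []
    for v in vectors:
        w = list(v)
        for q in qs:
            # every projection is taken against the ORIGINAL vector v
            w = sub(w, scale(q, dot(v, q)))
        qs.append(normalize(w))
    return qs

def modified_gram_schmidt(vectors):
    qs = []
    for v in vectors:
        w = list(v)
        for q in qs:
            # each projection is taken against the CURRENT, partially
            # orthogonalised vector w
            w = sub(w, scale(q, dot(w, q)))
        qs.append(normalize(w))
    return qs

vs = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
print(classical_gram_schmidt(vs))
print(modified_gram_schmidt(vs))

In exact arithmetic both return the same orthonormal vectors; the only change is which vector each projection is taken against, which is exactly a reordering of the instructions, and in floating point that reordering is what makes the modified version more stable.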
One important distinction you need to make is between a set of instructions and the output set. Here you can find a description of three shortest path algorithms. The set of possible results for a given input is the same, but they are three very distinct algorithms, and they also have three completely different high-level descriptions. To someone who does not care about that, though, these "do the same thing" (it almost hurts me to write this) and are equivalent.
Another important distinction is the similarity of steps between two algorithms. Let's take your example and write it in a slightly more formal notation:
procedure 1 (list, n):
    let sum = 0
    for i = 1 : n
        sum = sum + list[i]
    end for
    sum // using implicit return

procedure 2 (list, n):
    let sum = 0
    for i = n : 1
        sum = sum + list[i]
    end for
    sum // using implicit return
These two pieces of code have the same set of results, but the instructions seem to be ordered differently. Still, this is not true on a high level; it depends on how you formalise the procedures. Loops are one of those things that, if we reduce them to indices, change our procedure. In this particular case though (as already pointed out in the comments), we can essentially replace the loop with a more formalised for-each loop.
procedure 3 (list):
    let sum = 0
    for each element in list
        sum = sum + element
    end for
    sum
procedure 3 now does the same thing as procedure 1 and procedure 2: their results are the same, but the instructions again seem different. So the procedures are equivalent algorithms but not the same on the implementation level. They are not the same, since the order in which the summing instructions are executed differs between procedure 1 and procedure 2 and is left entirely open in procedure 3 (it depends on your implementation of for each!).
This is where the concept of a high-level description comes in. It is the same for all three algorithms, as you already pointed out. The following is from the Wikipedia article you are referring to.
1 High-level description
"...prose to describe an algorithm, ignoring the implementation details. At this level, we do not need to mention how the machine manages its tape or head."
2 Implementation description
"...prose used to define the way the Turing machine uses its head and the way that it stores data on its tape. At this level, we do not give details of states or transition function."
3 Formal description
Most detailed, "lowest level", gives the Turing machine's "state table".
Keeping this in mind your question really depends on the context it is posed in. All three procedures on a high level are the same:
1. Let sum = 0
2. For every element in list add the element to sum
3. Return sum
We do not care how we go through the list or how we sum, just that we do.
On the implementation level we already see a divergence. The procedures move differently over the "tape" but store the information in the same way. While procedure 1 moves "right" on the tape from a starting position, procedure 2 moves "left" on the tape from the "end" (careful with this because there is no such thing in a TM, it has to be defined with a different state, which we do not use in this level).
procedure 3, well, is not defined well enough to make that distinction.
On the low level we need to be very precise. I am not going down to the level of a TM state table thus please accept this rather informal procedure description.
procedure 1:
    1. Move right until you hit an unmarked integer or the "end"
       // In an actual TM this would not work; just for simplification I am using ints
    1.e. If you hit the end, terminate            // (i = n)
    2. Record value                               // (sum += list[i]) (of course this is a lot longer in an actual TM)
    3. Go back until you find the first marked number
    4. Go to 1.
procedure 2 would reverse instructions 1. and 3., thus they are not the same.
But on these different levels, are these procedures equivalent? Going by the Merriam-Webster definition of "equivalent", I'd say they are on all levels: their "value", or better their "output", is the same for the same input. The issue with the communication is that these algorithms, as you already stated in your question, return the same result, which makes them equivalent but not the same.
Your reference to floating-point inaccuracy implies the implementation level, on which the two algorithms are already different. As a mathematical model we do not have to worry about floating-point inaccuracy, because there is no such thing in mathematics (mathematicians live in a "perfect" world).
These algorithms are different implementation-level descriptions of the same high-level description. Thus, I would refer to them as different implementations of the same high-level algorithm, since the idea is the same.
The last important distinction is the further formalisation of an algorithm by assigning it to a set for its complexity (as pointed out perfectly in the comments by #jdehesa). If you just use big omicron, well... your sets are going to be huge and will make more algorithms "equivalent". For example, merge sort and bubble sort are both members of the set O(n^2) for their time complexity (imprecise, but n^2 is an upper bound for both). Obviously bubble sort is not in O(n*log2(n)) while merge sort is, but the O(n^2) description does not capture that. If we use big theta, then bubble sort and merge sort are no longer in the same set; context matters. There is more to describing an algorithm than just its steps, and that is one more way to distinguish algorithms.
To sum up: it depends on context, especially on who you are talking to. If you are comparing algorithms, make sure that you specify the level you are doing it on. To an amateur, saying "add up the list" will be good enough; for your docs, use a high-level description; when explaining your code, explain your implementation of that high-level description; and when you really need to formalise your idea before putting it into code, use a formal description. The latter will also allow you to prove that your program executes correctly. Of course, nowadays you do not have to write out all the states of the underlying TM anymore. When you describe your algorithms, do it in the appropriate form for the setting. And if you have two different implementations of the same high-level algorithm, just point out the differences on the implementation level (direction of traversal, implementation of summing, format of return values, etc.).
I guess you could call it an ambiguous algorithm. Although this term may not be well defined in the literature, consider your example of adding up the list of elements.
It could be defined as
1. Initialize sum to zero
2. Add elements in the list to sum one by one.
3. return the sum
The second step is ambiguous: you can add the elements in any order, since the order is not defined in the algorithm statement, and the sum may change in floating-point arithmetic.
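A tiny sketch (my own numbers) of why that unspecified order matters: floating-point addition is not associative, so two groupings of the same three values can give different sums.

a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)   # 0.6000000000000001
print(a + (b + c))   # 0.6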
One good example I came across: a Cornell lecture slide. That messy sandwich example is golden.
You could read what the term ambiguity generally refers to here (wiki); it is applied in various contexts, including computer science algorithms.
You may be referring to algorithms that, at least on the surface, perform the same underlying task but have different levels of numerical stability ("robustness"). Two examples of this may be:
calculating mean and variance (where the so-called "Welford algorithm" is more numerically stable than the naive approach; see the sketch after this list), and
solving a quadratic equation (with many formulas of different "robustness" to choose from).
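For the first bullet, here is a rough sketch of the two approaches (assumed, simplified implementations written by me, computing the population variance):

def variance_naive(xs):
    # "sum of squares minus square of the sum": prone to catastrophic
    # cancellation when the values are large relative to their spread
    n = len(xs)
    s = sq = 0.0
    for x in xs:
        s += x
        sq += x * x
    return (sq - s * s / n) / n

def variance_welford(xs):
    # Welford's running update: numerically much better behaved
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return m2 / n

data = [1e8 + 4, 1e8 + 7, 1e8 + 13, 1e8 + 16]   # exact variance is 22.5
print(variance_naive(data), variance_welford(data))
# the naive result typically drifts from 22.5; Welford's stays close

Both functions implement the same high-level description ("compute the variance"), yet they behave differently in floating point.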
"Equivalent" algorithms may also include algorithms that are not deterministic, or not consistent between computer systems, or both; for example, due to differences in implementation of floating-point numbers and/or floating-point math, or in the order in which parallel operations finish. This is especially problematic for applications that care about repeatable "random" number generation.
I'm teaching myself data structures, and am on a section giving a brief outline of time analysis. The following problem was given:
"Each of the following are formulas for the number of operations in some
algorithm. Express each formula in big-O notation."
The problem then goes on to give multiple scenarios. One was:
g.) The number of times that n can be divided by 10 before dropping below 1.0.
(Note: It doesn't state what n is exactly, so I'm assuming it's just some input size. But I don't think it matters in terms of how the problem is stated)
I reasoned that as this would relate to its order of magnitude, it should just be log n. However, the text says that it should be quadratic. Is there something I am missing?
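To check my own reasoning I wrote this little snippet (my code, not from the book), which counts the divisions and compares the count against log base 10:

import math

def divisions_until_below_one(n):
    count = 0
    while n >= 1.0:
        n /= 10
        count += 1
    return count

for n in [1, 10, 1000, 10**6, 10**9]:
    print(n, divisions_until_below_one(n), math.floor(math.log10(n)) + 1)

The two columns agree for every n I tried, which is why the count looks logarithmic to me rather than quadratic.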
Any help to improve my thinking would be greatly appreciated.
I do not know a whole lot about math, so I don't know how to begin to google what I am looking for; I rely on the intelligence of experts to help me understand what I am after...
I am trying to find the smallest string of equations for a particular large number. For example given the number
"39402006196394479212279040100143613805079739270465446667948293404245721771497210611414266254884915640806627990306816"
The smallest equation is 64^64 (that I know of). It contains only 5 bytes.
Basically the program would reverse the math: instead of taking an expression and finding an answer, it takes an answer and finds the most simplistic expression. Simplistic in this case means smallest string, not really simple math.
Has this already been created? If so where can I find it? I am looking to take extremely HUGE numbers (10^10000000) and break them down to hopefully expressions that will be like 100 characters in length. Is this even possible? are modern CPUs/GPUs not capable of doing such big calculations?
Edit:
Ok. So finding the smallest equation takes WAY too much time, judging by the answers. Is there any way to brute-force this and get the smallest found so far?
For example, given a super, super large number, sometimes taking the square root of the number will result in an expression smaller than the number itself.
As far as what expressions it would start off with: it would naturally try the expressions that make the representation the smallest. I am sure there are tons of math things I don't know, but one of the ways to make a number a lot smaller is powers.
Just to throw another keyword in your Google hopper, see Kolmogorov Complexity. The Kolmogorov complexity of a string is the size of the smallest Turing machine that outputs the string, given an empty input. This is one way to formalize what you seem to be after. However, calculating the Kolmogorov complexity of a given string is known to be an undecidable problem :)
Hope this helps,
TJ
There's a good program to do that here:
http://mrob.com/pub/ries/index.html
I asked the question "what's the point of doing this", as I don't know if you're looking at this question from a mathematics point of view, or a large-number-factoring point of view.
As other answers have considered the factoring point of view, I'll look at the maths angle. In particular, the problem you are describing is a compressibility problem. This is where you have a number, and want to describe it in the smallest algorithm. Highly random numbers have very poor compressibility, as to describe them you either have to write out all of the digits, or describe a deterministic algorithm which is only slightly smaller than the number itself.
There is currently no general mathematical theorem which can determine if a representation of a number is the smallest possible for that number (although a lower bound can be discovered by understanding Shannon's information theory). (I said general theorem, as special cases do exist.)
As you said you don't know a whole lot of math, this is perhaps not a useful answer for you...
You're doing a form of lossless compression, and lossless compression doesn't work on random data. Suppose, to the contrary, that you had a way of compressing N-bit numbers into (N-1)-bit numbers. In that case, you'd have 2^N values to compress into 2^(N-1) designations, which is an average of 2 values per designation, so the average designation couldn't be decompressed unambiguously. Lossless compression works well on relatively structured data, where the data we're likely to get compresses small and the data we aren't going to get actually grows a little.
It's a little more complicated than that, since you're compressing partly by allowing more information per character. (There are a greater number of N-character sequences involving digits and operators than digits alone.) Still, you're not going to get lossless compression that, on the average, is better than just writing the whole numbers in binary.
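A small way to see the counting argument concretely (my own illustration): any scheme that maps every N-bit value to some strictly shorter bit string has more inputs than possible outputs, so two inputs must share an output and the scheme cannot be reversed losslessly.

N = 8
inputs = 2 ** N                           # distinct N-bit values: 256
outputs = sum(2 ** k for k in range(N))   # all bit strings shorter than N bits: 255
print(inputs, outputs, inputs > outputs)  # 256 255 True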
It looks like you're basically wanting to do factoring on an arbitrarily large number. That is such a difficult problem that it actually serves as the cornerstone of modern-day cryptography.
This really appears to be a mathematics problem, and not a programming or computer science problem. You should ask this on https://math.stackexchange.com/
While your question remains unclear, perhaps integer relation finding is what you are after.
EDIT:
There is some speculation that finding a "short" form is somehow related to the factoring problem. I don't believe that is true unless your definition requires a product as the answer. Consider the following pseudo-algorithm, which is just a sketch and for which no optimization is attempted.
If "shortest" is a well-defined concept, then in general you get "short" expressions by using small integers to large powers. If N is my integer, then I can find an integer nearby that is 0 mod 4. How close? Within +/- 2. I can find an integer within +/- 4 that is 0 mod 8. And so on. Now that's just the powers of 2. I can perform the same exercise with 3, 5, 7, etc. We can, for example, easily find the nearest integer that is simultaneously the product of powers of 2, 3, 5, 7, 11, 13, and 17, call it N_1. Now compute N-N_1, call it d_1. Maybe d_1 is "short". If so, then N_1 (expressed as power of the prime) + d_1 is the answer. If not, recurse to find a "short" expression for d_1.
We can also pick integers that are maybe farther away than our first choice; even though the difference d_1 is larger, it might have a shorter form.
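Here is a rough, unoptimized sketch of that idea in Python. It is my own simplification: instead of products of several prime powers it only looks for a nearby perfect power b^e and recurses on the remainder, so it is greedy and certainly not guaranteed to find the shortest form.

def short_form(n, max_base=20, max_depth=2):
    # Return a short-ish expression string whose value is n (greedy, not optimal).
    if max_depth == 0 or n < 1000:
        return str(n)
    best = str(n)
    for b in range(2, max_base):
        # largest power of b not exceeding n, found by repeated multiplication
        power, e = b, 1
        while power * b <= n:
            power *= b
            e += 1
        for candidate, exp in ((power, e), (power * b, e + 1)):
            d = n - candidate
            expr = f"{b}^{exp}"
            if d:
                tail = short_form(abs(d), max_base, max_depth - 1)
                if "+" in tail or "-" in tail:
                    tail = f"({tail})"   # keep the expression unambiguous
                expr += ("+" if d > 0 else "-") + tail
            if len(expr) < len(best):
                best = expr
    return best

print(short_form(64 ** 64))            # e.g. 2^384
print(short_form(2 ** 100 + 12345))    # e.g. 2^100+12345

Replacing the single power b^e with the nearest "smooth" number (a product of powers of several small primes), as described above, would give more candidates at each step, at the cost of a much larger search.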
The existence of an infinite number of primes means that there will always be numbers that cannot be simplified by factoring. What you're asking for is not possible, sorry.