Time Complexity and Big O Notation [closed]

I am stuck on a homework question. The question is as follows.
Consider four programs - A, B, C, and D - that have the following performances.
A: O(log n)
B: O(n)
C: O(n^2)
D: O(2^n)
If each program requires 10 seconds to solve a problem of size 1000, estimate the time required by each program when the size of its problem increases to 2000.
I am pretty sure that O(n) would just double to 20 seconds, since we are doubling the size; this would correspond to a loop in Java that iterates n times, and doubling n doubles the runtime. But I am completely lost on A, C, and D.
I am not looking for direct answers to this question, but rather for someone to dumb down the way I can arrive at the answer. Maybe by explaining what each of these Big O notations is actually doing on the back end. If I understood the way that the algorithm is calculated and where all the elements fit into some sort of equation to solve for time, that would be awesome. Thank you in advance.
I have spent weeks combing through the textbook, but it is all written in a very complicated manner that I am having a hard time digesting. Videos online haven't been much help either.

Let's take an example (one that is not in your list): O(n^3).
The ratio between the sizes of your problems is 2000/1000 = 2. Big O notation gives you an estimate of how running time scales with input size: if a problem of size n takes time on the order of n^3, then a problem of size 2n takes time on the order of (2n)^3 = 8n^3, i.e. about 8 times longer than the original task.
I hope that helps.
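To keep with the spirit of not giving direct answers, here is a minimal Java sketch (my addition, not part of the original answer) that applies this scaling rule, t_new = t_old * f(n_new) / f(n_old), to the O(n^3) example; plug in your own growth functions for A through D:

import java.util.function.DoubleUnaryOperator;

public class ScalingEstimate {
    // Estimate the new running time from the old one, assuming time grows like f.
    static double estimate(DoubleUnaryOperator f, double oldN, double oldTime, double newN) {
        return oldTime * f.applyAsDouble(newN) / f.applyAsDouble(oldN);
    }

    public static void main(String[] args) {
        // The O(n^3) example from the answer: doubling n multiplies time by 8.
        DoubleUnaryOperator cubic = n -> n * n * n;
        System.out.println(estimate(cubic, 1000, 10.0, 2000)); // prints 80.0
        // Try Math.log(n), n, n * n, ... for the cases in your homework.
    }
}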

Related

Which algorithm would be the faster algorithm? [closed]

As per Big O notation, if the time complexity of one algorithm is O(2^n) and the other's is O(n^1000), which one would be faster?
How to recognize the overall behavior in some non-obvious cases: take the logarithm of both functions.
(Sometimes we can also take the ratio of the two functions and evaluate its limit for large n; here that approach is not as convenient.)
log(2^n) = n*log(2)
log(n^1000) = 1000*log(n)
The first result is a straight line with positive slope. The second one's plot is a concave curve (negative second derivative), so the first function becomes larger at some big value of n.
[Plot of n*log(2) vs. 1000*log(n) omitted.]
O(n^1000) is in the same class as O(n^2) and O(n^777777777), which is polynomial time, whereas O(2^n) is exponential time, which is way slower than polynomial time.
https://www.bigocheatsheet.com/
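If you want to see the actual crossover point rather than just the asymptotic argument, here is a small Java sketch (my addition, with the starting point and output wording assumed) that compares the two logarithms directly, since the raw values overflow a double long before the curves cross:

public class Crossover {
    public static void main(String[] args) {
        // log(2^n) = n*log(2) and log(n^1000) = 1000*log(n); compare those.
        // Start at n = 2 to skip the trivial n = 1 case, where 2^1 > 1^1000.
        for (long n = 2; ; n++) {
            if (n * Math.log(2) > 1000 * Math.log(n)) {
                System.out.println("2^n exceeds n^1000 from n = " + n);
                break;
            }
        }
    }
}

So O(n^1000) does win eventually, but only past a crossover in the low tens of thousands.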

Could you help me design an algorithm and prove for this problem please? [closed]

I am trying to find an algorithm for this problem that is given in my homework:
Assume that you have m jobs and n machines. Each machine i has a nondecreasing latency function li : N → R that depends only on the number of jobs assigned to machine i. To illustrate, if lj(5) = 7, then machine j needs to work 7 units of time if it is assigned (any) five of the jobs. Assume that li(0) = 0 for all machines i.
Given a set of m jobs and n machines, where each machine is associated with a nondecreasing latency function as described above, you are asked to give an O(m · lg n) algorithm that assigns each job to a machine such that the makespan (the maximum amount of time any machine executes) is minimized. Needless to say, but just in case: you need to prove that your algorithm is correct.
I am allowed to get some help so this is not cheating.
I am stuck on where and how to start finding an algorithm for this problem. Could you help me please?
O(m · lg n) is a good clue.
How can assigning each of the m jobs take O(lg n) time? Apparently the machines should be organized in some data structure that supports the needed operation in that time.
Think about a priority queue based on a binary heap, as in the sketch below.
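Here is a minimal Java sketch of the greedy that hint points at — my own illustration, with hypothetical latency functions, and with the correctness proof deliberately left to you: keep the machines in a min-heap keyed by the latency each machine would have if it received one more job, and give every job to the machine with the cheapest next slot.

import java.util.PriorityQueue;
import java.util.function.IntToDoubleFunction;

public class MakespanGreedy {
    static final class Machine {
        final IntToDoubleFunction latency; // l_i : job count -> time
        int jobs = 0;
        Machine(IntToDoubleFunction latency) { this.latency = latency; }
        double nextLatency() { return latency.applyAsDouble(jobs + 1); }
    }

    // Assign m jobs; each job costs O(lg n) heap work, O(m * lg n) total.
    static double assign(Machine[] machines, int m) {
        PriorityQueue<Machine> heap =
            new PriorityQueue<>((a, b) -> Double.compare(a.nextLatency(), b.nextLatency()));
        for (Machine mc : machines) heap.add(mc);

        double makespan = 0;
        for (int job = 0; job < m; job++) {
            Machine best = heap.poll();   // machine with the cheapest next job
            best.jobs++;
            makespan = Math.max(makespan, best.latency.applyAsDouble(best.jobs));
            heap.add(best);               // re-insert under its new key
        }
        return makespan;
    }

    public static void main(String[] args) {
        // Hypothetical latency functions, purely for illustration.
        Machine[] machines = {
            new Machine(k -> 2.0 * k),         // grows linearly per job
            new Machine(k -> (double) k * k)   // slows down quadratically
        };
        System.out.println(assign(machines, 5)); // prints the greedy makespan, 6.0
    }
}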

For inputs of size n, for which values of n does insertion-sort beat merge-sort? [closed]

In the book Introduction to Algorithms (Cormen), exercise 1.2-2 asks the following question about comparing implementations of insertion sort and merge sort: for inputs of size n, insertion sort runs in 8n^2 steps while merge sort runs in 64n lg n steps; for which values of n does insertion sort beat merge sort?
Although I am interested in the answer, I am more interested in how to find the answer step by step (so that I can repeat the process to compare any two given algorithms if at all possible).
At first glance, this problem seems similar to something like finding the break-even point in business calculus, a class which I took more than 5 years ago, but I am not sure, so any help would be appreciated.
Thank you
P.S. If my tags are incorrect, this question is incorrectly categorized, or some other convention is being abused here, please limit the chastising to a minimum, as this is my first time posting a question.
Since you are to find when insertion sort beats merge sort:
8n^2 <= 64n lg n
n^2 <= 8n lg n
n <= 8 lg n
Solving n - 8 lg n = 0 numerically gives n ≈ 43.6.
So for 2 <= n <= 43, insertion sort works better than merge sort (n = 1 is excluded because lg 1 = 0, which makes the merge-sort step count zero there).
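You can confirm the algebra with a brute-force scan; here is a short Java sketch (my addition) that checks the two step counts directly:

public class SortCrossover {
    static double lg(double x) { return Math.log(x) / Math.log(2); } // log base 2

    public static void main(String[] args) {
        // Print every n up to 60 where insertion sort's step count wins or ties.
        for (int n = 1; n <= 60; n++) {
            double insertion = 8.0 * n * n;
            double merge = 64.0 * n * lg(n);
            if (insertion <= merge) {
                System.out.println("n = " + n + ": insertion sort wins or ties");
            }
        }
        // Expected output: the lines for n = 2 through n = 43.
    }
}

The same scan works for any two step-count formulas, which answers the "repeatable process" part of the question.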

Log-log plot/graph of algorithm time complexity [closed]

I just wrote the quicksort and merge sort algorithms, and I want to make a log-log plot of their run time vs. the size of the array to sort.
As I have never done this, my question is: does it matter if I choose arbitrary numbers for the array length (size of input), or should I follow a pattern (something like 10^3, 10^4, 10^5, etc.)?
In general, you need to choose array lengths, for each method, that are large enough to display the expected O(n log n) or O(n^2) type behavior.
If your n is too small the run time may be dominated by other growth rates, for example an algorithm with run time = 1000000*n + n^2 will look to be ~O(n) for n < 1000. For most algorithms the small n behavior means that your log-log plot will initially be curved.
On the other hand, if your n is too large your algorithm may take too long to complete.
The best compromise may be to start with a small n, time for n, 2n, 4n, ..., or n, 3n, 9n, ..., and keep increasing until you can clearly see the log-log plots approaching straight lines.
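As a concrete starting point, here is a Java sketch (my addition; it uses Arrays.sort as a stand-in for your own quicksort/mergesort) that times geometrically increasing sizes and prints the successive log-log slopes, which should settle near 1 for an O(n log n) sort, give or take the slowly varying log factor and JIT warmup noise:

import java.util.Arrays;
import java.util.Random;

public class LogLogSlope {
    public static void main(String[] args) {
        Random rng = new Random(42);
        long prevTime = 0;
        int prevN = 0;
        for (int n = 1 << 12; n <= 1 << 22; n <<= 1) { // 4096 up to ~4M, doubling
            int[] a = rng.ints(n).toArray();
            long t0 = System.nanoTime();
            Arrays.sort(a);
            long elapsed = System.nanoTime() - t0;
            if (prevN > 0) {
                // Slope of the log-log plot between consecutive points.
                double slope = (Math.log(elapsed) - Math.log(prevTime))
                             / (Math.log(n) - Math.log(prevN));
                System.out.printf("n = %8d  slope ~ %.2f%n", n, slope);
            }
            prevN = n;
            prevTime = elapsed;
        }
    }
}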

Big Oh Notation Confusion [closed]

I'm not sure if this is a problem with my understanding, but this aspect of Big O notation seems strange to me. Say you have two algorithms: the first performs n^2 operations and the second performs n^2 - n operations. Because of the dominance of the quadratic term, both algorithms have complexity O(n^2), yet the second algorithm will always be better than the first. That seems weird to me; Big O notation makes it seem like they are the same. I dunno...
Big O is not about the time it takes to execute your algorithm, it is about how well it will scale when presented with large data sets (large values of n).
When presented with a large data set, the n^2 term will quickly overshadow any linear term. So the linear term becomes insignificant.
When n grows towards infinity, n^2 will be much greater than n, so the -n term won't make any significant difference to the outcome.
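A quick numerical illustration (my addition) of why the -n term stops mattering; the ratio of the two operation counts tends to 1 as n grows:

public class LowerOrderTerms {
    public static void main(String[] args) {
        // Compare n^2 - n against n^2 at a few widely spaced sizes.
        for (long n = 10; n <= 10_000_000L; n *= 100) {
            double full = (double) n * n;
            double minusN = (double) n * n - n;
            System.out.printf("n = %,12d  (n^2 - n) / n^2 = %.8f%n", n, minusN / full);
        }
    }
}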
