Help with question [closed] - algorithm

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
Two algorithms A and B solve the same algorithmic problem, A taking n^3 seconds and B taking n days.
(i) Which algorithm is asymptotically preferable?
(ii) How large does n need to be before B takes one-quarter of the time taken by A?
How do I go about solving these?
My answer for (i) is that B is preferable, since n^3 grows asymptotically faster than n. Days versus seconds is just a constant factor and therefore does not matter as n approaches infinity.
For (ii) my guess is 2 days, but I wondered whether others got the same.

I could be completely wrong here, but I think you want this
24*60*60*n = n^3 * 1/4
which, when plugged into Wolfram Alpha, gives
587.87....
or
-587.87 in some alternate universe ;)
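
For what it's worth, the same number falls out without Wolfram Alpha; a quick sketch of the algebra and the check:

import math

SECONDS_PER_DAY = 24 * 60 * 60            # 86400
# B takes n days = 86400*n seconds; we want B = A/4 with A = n^3 seconds:
#   86400*n = n^3/4   =>   n^2 = 4*86400   =>   n = sqrt(345600)
n = math.sqrt(4 * SECONDS_PER_DAY)
print(n)                                  # ~587.877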

This looks like a problem where a brute-force approach would work and give you a better feel for it. I would look at the point where the two functions take the same total time and at what happens above and below that threshold. Also, for good practice, it may be worth looking for multiple solutions.
y=n^3 seconds
y = n * (60 seconds * 60 minutes * 24 hours) = 86400*n seconds
Then step through increasing values of n, looking for the point where the two values of y are equal (or, for part (ii), where the first is four times the second).
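
A minimal sketch of that stepping approach (same unit conversion as above, checking the one-quarter condition from part (ii)):

SECONDS_PER_DAY = 24 * 60 * 60

def time_a(n):                 # algorithm A: n^3 seconds
    return n ** 3

def time_b(n):                 # algorithm B: n days, expressed in seconds
    return n * SECONDS_PER_DAY

for n in range(1, 2000):
    if time_b(n) <= time_a(n) / 4:         # B takes at most a quarter of A's time
        print("first integer n:", n)       # 588
        break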

Related

Till what number is finding the divisors of a given number possible in under 5 seconds [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I am given a number n <= 10^85, and I want to compute all of the divisors of the given n.
Till what number can I compute all of the divisors in under 5 seconds?
I am aware that the execution time of whatever algorithm is chosen depends on the hardware it is run on. Assume that it is run on an average developer's PC. I am not expecting an exact number up to which I can compute the divisors; a rough estimate is sufficient.
I looked at several algorithms for finding the divisors and convinced myself that the answer must be < 10^85.
The algorithm I looked at.
How I tried to estimate the time (I used the running time of the algorithm above)
Your first link says that it took 35 minutes for a 1.6 GHz UltraSparc III to factor a 267-bit semiprime. A 267-bit number is less than 10^81, so it is four orders of magnitude smaller than 10^85. The 1.6 GHz UltraSparc is not a typical desktop machine, at least not where I live, but we can suppose that msieve running on a typical desktop machine will have similar performance. Unless you feel that you can reimplement msieve several thousand times as fast, it seems unlikely that you could factor an 85-digit number in less than 5 seconds.
For what it's worth, on my core-i5 laptop msieve took 124 seconds to factor the 76-digit semiprime 1031024382763741345720693024144503046286361588371249770826450615723688608887, which is the product of two 38-digit primes. In 5 seconds, it was able to do the 62-digit semiprime 7308332279578159953175572146691794473667384671982397578861693, which is the product of a 32-digit prime and a 30-digit prime. [Note 1]
Your second link -- "how to estimate the time" -- is meaningless. The time estimate is not measured in any particular units; it is simply an indication of growth rate. If an algorithm runs in Θ(f(n)) and you compute f(n) as 1000 (or 8.61036E20), you know essentially nothing: The algorithm could take 1000 nanoseconds or it could take 1000 years.
Notes
I found those semiprimes by using msieve to factor 20 random 128-bit numbers, which gave me six primes with 30 or more decimal digits.
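
Once you do have a factorization (from msieve or anything else), enumerating the divisors themselves is cheap; a minimal sketch, assuming the factorization is given as a prime-to-exponent map:

def divisors_from_factorization(factors):
    # factors: dict mapping prime -> exponent, e.g. {2: 3, 5: 1} for 40
    divs = [1]
    for p, e in factors.items():
        divs = [d * p**k for d in divs for k in range(e + 1)]
    return sorted(divs)

print(divisors_from_factorization({2: 3, 5: 1}))   # [1, 2, 4, 5, 8, 10, 20, 40]

The hard part is entirely the factorization; the enumeration step is proportional to the number of divisors.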

Fill in Cup from Coke Machine (Algorithm) [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Here is an interview question; could somebody give me a hint? I am thinking about DFS or BFS, but I cannot come up with a clear solution.
Three coke machines. Each one has two values min & max, which means if
you get coke from this machine it will load you a random volume in the
range [min, max]. Given a cup size n and minimum soda volume m, show
if it's possible to make it from these machines.
This is assuming you're not allowed to overflow the cup. If you can overflow it, you can always make it.
Let's label the machines (min1,max1), (min2,max2), (min3,max3), and let a1, a2, and a3 denote the number of times you've used each machine.
We need to find a1, a2, and a3 that satisfy:
Condition 1 : a1*min1 + a2*min2 + a3*min3 >= m
Condition 2 : a1*max1 + a2*max2 + a3*max3 <= n
Apparently it's not required to find the optimal way to fill the cup (minimizing a1+a2+a3), so you can simply use DFS.
You use Condition 2 as the depth limit (meaning if Condition 2 isn't fulfilled you stop going deeper in the graph) and if you ever reach Condition 1 you have yourself the answer (return yes). If you finish the search and find no answers, return no.
Seeing as this is an interview question though, I really doubt that DFS or BFS would be the way to solve it. This could easily time out with big m and n values.
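
For what it's worth, a minimal sketch of the bounded DFS described above (the machine ranges and cup values here are made up for illustration, and it assumes every machine's max is positive so the search terminates):

def can_fill(machines, n, m):
    # machines: list of (min, max) pour ranges; n: cup size; m: minimum required volume
    def dfs(worst_low, worst_high):
        if worst_high > n:         # Condition 2 violated: the cup could overflow
            return False
        if worst_low >= m:         # Condition 1 met: at least m is guaranteed
            return True
        # otherwise try one more pour from each machine
        return any(dfs(worst_low + lo, worst_high + hi) for lo, hi in machines)
    return dfs(0, 0)

print(can_fill([(2, 3), (4, 5)], n=10, m=7))   # True (two pours from the second machine)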

Is O(nk(log(k))) algorithm same as O(n(log(k))) [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
I was asked to give an algorithm that was supposed to be O(n(log(k)))
k is the number of arrays and n is the total number of elements in all of these. I had to sort the arrays.
Details aside, I came up with an algorithm that does the job in k*log(k) times the total number of elements, i.e. O(nk(log(k))).
Also, in this case k is much smaller than n, so it won't be n^2(log(n)) (which it would be if k and n were about the same), right?
Well, no, it's not the same. If k is a variable (as opposed to a constant) in the complexity expression then O(nk(log(k))) > O(n(log(k))).
That is because there is no constant C such that C*n*log(k) >= k*n*log(k) for every n and k; k can grow without bound.
The way you describe the question both k and n are input parameters. If that is the case then the answer to your question is
'No, O(n*k *log(k)) is not the same as O(n*log(k))'.
It is not that hard to see that the first one grows faster than the second one, but it is even more obvious if you fix the value of n. Consider n being a constant, say 1. Then it is more obvious that O(k*log(k)) is not the same as O(log(k)).
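
For reference, assuming the underlying exercise was the usual "merge k sorted arrays" problem, the intended O(n(log(k))) bound comes from keeping the k current heads in a min-heap; a minimal sketch:

import heapq

def merge_k_sorted(arrays):
    # one heap entry per non-empty array: (current value, array index, element index)
    heap = [(arr[0], i, 0) for i, arr in enumerate(arrays) if arr]
    heapq.heapify(heap)
    result = []
    while heap:
        value, i, j = heapq.heappop(heap)     # smallest of the k current heads: O(log k)
        result.append(value)
        if j + 1 < len(arrays[i]):
            heapq.heappush(heap, (arrays[i][j + 1], i, j + 1))
    return result

print(merge_k_sorted([[1, 4, 7], [2, 5], [0, 3, 6, 8]]))   # [0, 1, 2, 3, 4, 5, 6, 7, 8]

Each of the n elements passes through the heap once, and each heap operation costs O(log k), hence O(n(log(k))) overall.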

Do we really have case in this algorithm [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
I am having trouble working out the running time of the following algorithm.
First, my question: is the case really important here? (I cannot come up with two different inputs of the same size that the algorithm treats differently.)
Second, I think this algorithm runs in O(n^2). Am I right?
The comment you wrote in #OBu's answer is about only a quarter right:
1*n + 2*(n-1) + 3*(n-2) + ... +n*1
That equals:
Sum(i=1..n, i*(n-i+1)) = (n+1)*Sum(i) - Sum(i*i) = (n+1)*[n(n+1)/2] - [n(n+1)(2n+1)/6] = n(n+1)(n+2)/6
If you want, feel free to expand the exact formula, but the point is that the overall complexity is O(n^3).
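
A quick numerical check of that count, evaluating the sum directly against the closed form:

def ops(n):
    # 1*n + 2*(n-1) + ... + n*1
    return sum(i * (n - i + 1) for i in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, ops(n), n * (n + 1) * (n + 2) // 6)   # the last two columns agree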
As a rule of thumb (more a back-of-the-envelope trick I've picked up, just to give you a quick idea): if you are unsure about an algorithm with multiple nested for loops (of different lengths, but all related to n, as you have above), try to compute how many operations are performed around the middle of the algorithm (i = n/2). This usually gives you an idea of what the running time complexity of the whole thing looks like: you are essentially computing the largest term in the sum, so the overall complexity is always at least as large as what you compute (and in most cases it is the same).
Just to give you some hints:
How many loops do you have and how are they nested?
How often does each loop run? (Work this out from the outermost loop inward.)
If in doubt, try it with n=4 or 5 and calculate each step. After this, you'll have a good feeling for what's going on.

Optimization similar to Knapsack [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I am trying to find a way to solve an Optimization problem as follows:
I have 22 different objects that can be selected more than once, and an evaluation function f that takes the multiplicities and calculates the total value.
f is a product over fractions of linear (affine) terms and as such, differentiable and even smooth in the allowed region.
I want to optimize f with respect to the 22 variables, with the additional conditions that certain sums may not exceed certain values (for example, if a,...,v are my variables, a + e + i + m + q + s <= 9). Because of this, all of the variables are bounded.
If f were strictly monotonic, this could be solved optimally by a (minimally modified) knapsack solution. However, the function isn't convex. That means one cannot even assume that if taking object A is better than taking object B with an empty knapsack, this choice still holds once a third object C is added (C could raise B's benefit above A's). So a greedy algorithm cannot be used.
Are there similar algorithms that solve such a problem in an optimal (or at least nearly optimal) way?
EDIT: As requested, an example of what the problem is (I chose 5 variables a,b,c,d,e for simplicity)
for example,
f(a,b,c,d,e) = e*(a*0.45+b*1.2-1)/(c+d)
(Every variable only appears once, if this helps at all)
Also, for example, a+b+c=4, d+e=3
The problem is to optimize that with respect to a,b,c,d,e as integers. There are plenty of optimization algorithms for convex functions, but very few for non-convex ones...
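
For the small example above the integer feasible set is tiny, so exhaustive enumeration already gives the exact optimum; a minimal sketch using only the numbers from the example (the full 22-variable problem is of course far too large for this):

from itertools import product

def f(a, b, c, d, e):
    return e * (a * 0.45 + b * 1.2 - 1) / (c + d)

best, best_args = None, None
# constraints from the example: a + b + c = 4 and d + e = 3, all non-negative integers
for a, b, c, d, e in product(range(5), range(5), range(5), range(4), range(4)):
    if a + b + c != 4 or d + e != 3:
        continue
    if c + d == 0:                      # f is undefined when c + d = 0
        continue
    value = f(a, b, c, d, e)
    if best is None or value > best:
        best, best_args = value, (a, b, c, d, e)

print(best, best_args)                  # ~7.8 at (a, b, c, d, e) = (0, 3, 1, 0, 3)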
