How would I find the smallest positive integer n for which algorithm B outperforms algorithm A?
A = n/4, B = 8 × log2(n)   (log base 2 of n, not log of 2n)
A = n^3/10, B = 5 × n^2
A = n^2/2, B = 20 × n × log2(n)   (log base 2 of n, not log of 2n)
A = n^4, B = 16 × n^2 × n
It would be greatly appreciated if someone could help me find answers to these :)
You are really asking whether A(n) > B(n).
It is simple to answer those questions:
solve the inequalities for n
You can also plot the two functions on the same plane and see how they behave and what the relation between them is. The following is for the first of your questions; as you can see from the graph, it is clear when one outperforms the other.
For instance n^3/10 > 5×n^2 solves for n>50
http://www.wolframalpha.com/input/?i=n%5E3%2F10+%3E+5%C3%97n2
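If you prefer to check numerically, here is a small brute-force sketch (Python; the helper name is mine) that finds the crossover point for the n^3/10 versus 5 × n^2 pair:

def smallest_n_where_b_wins(A, B, limit=10**6):
    # Return the smallest positive integer n with B(n) < A(n), or None if not found.
    for n in range(1, limit + 1):
        if B(n) < A(n):
            return n
    return None

# Second pair: A(n) = n^3/10, B(n) = 5*n^2  ->  prints 51, i.e. n > 50
print(smallest_n_where_b_wins(lambda n: n**3 / 10, lambda n: 5 * n**2))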
Consider asking this kind of question on https://math.stackexchange.com/
Hope this helps
I am trying to solve T(n) = 2T(n/2) + n(log n)^2.
Following the steps, I got:
= 2^k T(n/2^k) + n (log2(n/2^(k-1)))^2 + n (log2(n/2^(k-2)))^2 + … + n (log2(n/2))^2 + n (log2 n)^2
when n=2^k I got:
But I have no idea how to simplify the summation formula to get the Θ() notation.
Can anyone help? Thanks a lot.
The summation you have doesn't look quite right to me. Let's re-derive it:
... after m iterations. Let's assume the stopping condition is n = 1 (without loss of generality):
... where we have employed two of the logarithm rules. As you can see the summation is in fact over "free indices" and not the logs themselves. Using the following integer power sums:
... we get:
To evaluate the Θ-notation, the highest order term is:
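For reference, here is a sketch of the full unrolling written out (my own write-up, assuming log means log base 2 and that T(1) is a constant):

\[
T(n) = 2^m\,T\!\left(\frac{n}{2^m}\right) + n\sum_{j=0}^{m-1}\left(\log\frac{n}{2^j}\right)^2
     = 2^m\,T\!\left(\frac{n}{2^m}\right) + n\sum_{j=0}^{m-1}(\log n - j)^2
\]

Stopping at n/2^m = 1, i.e. m = log n = k:

\[
T(n) = n\,T(1) + n\sum_{i=1}^{k} i^2 = n\,T(1) + n\,\frac{k(k+1)(2k+1)}{6}
\]

so the highest-order term is (1/3) n (log n)^3 and T(n) = Θ(n (log n)^3).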
If you have read the Master theorem, you will realise that the question you have asked is actually the 2nd case of the Master theorem (refer to the link above).
So, here a = 2, b = 2, and f(n) = Θ(n^(c_crit) × (log n)^k), where k = 2 and c_crit = log_b(a) = 1.
So, by the Master theorem, T(n) = Θ(n^(c_crit) × (log n)^(k+1)) = Θ(n (log n)^3).
For the algorithms below I need help with the following.
Algorithm Sum(m, n)
//Input: A positive integer n and another positive integer m ≤ n
//Output: ?
    sum = 0
    for i = m to n do
        for j = 1 to i do
            sum = sum + 1
        end for j
    end for i
    return sum
I need help figuring out what it computes, and what the formula is for the total number of additions (sum = sum + 1).
What I have so far: the algorithm computes the sum of all the positive integers between m and n, including m and n.
The formula for the number of additions is:
m + (m+1) + … + n
I don't quite get your questions... it seems you are asking something but you also provide the answers yourself already... anyway, here's my answer to the questions...
For Q1, it seems you are asking for the output and the total number of iterations (which is sum(m..n) = (n+m)(n-m+1)/2).
For Q2, it seems you are also asking how many times the comparison is performed, which is n-1 times.
To solve the recurrence T(n) = aT(n-1) + c, where a and c are constants,
substitute repeatedly (n-1, n-2, n-3, ... down to 1); with a = 1, as in a plain loop, you find that T(n) = O(n).
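As a quick check of that claim (taking a = 1 and a constant T(1), which is my assumption about what Q2 looks like):

\[
T(n) = T(n-1) + c = T(n-2) + 2c = \cdots = T(1) + (n-1)\,c = O(n)
\]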
PS: If this is homework (and it may well be, since you seem to have your own answers already), I strongly advise you to work through some specific cases for Q1. For Q2 you should try to understand several methods for solving recurrence relations: the substitution method can handle an easy relation like this one, while many others may need the Master theorem.
You should also make sure you understand why Q2's complexity is actually the same as that of a normal naive iterative for loop.
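Here is a small Python sketch (function names are mine) that cross-checks the closed form for Q1 against a direct transcription of the algorithm:

def algorithm_sum(m, n):
    # Direct transcription of Algorithm Sum(m, n): one addition per inner iteration.
    total = 0
    for i in range(m, n + 1):
        for j in range(1, i + 1):
            total += 1
    return total

def closed_form(m, n):
    # Closed form for m + (m+1) + ... + n.
    return (n + m) * (n - m + 1) // 2

assert all(algorithm_sum(m, n) == closed_form(m, n)
           for n in range(1, 30) for m in range(1, n + 1))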
Let the general diophantine equation be :
a1*x1 + a2*x2 + ... + am*xm = n, where gcd(a1, ..., am) = 1 and a1, ..., am >= 0
I want to find the number of non-negative (x1..xm) solutions.
Could someone help me with this?
Detailed mathematical explanations or algorithms will help very much.
What you are searching for is known as the "Smith normal form". It is explained e.g. at Wikipedia: http://en.wikipedia.org/wiki/Smith_normal_form The Wikipedia entry also explains the standard algorithm for this kind of problem.
In your special case this is basically the Euclidean gcd algorithm.
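If what you ultimately need is just the count of non-negative solutions for a concrete n, a coin-change style dynamic program also works (this is a different, more elementary approach than the Smith normal form route above; it assumes every ai >= 1 and runs in O(m*n) time):

def count_solutions(a, n):
    # Number of non-negative integer solutions of a[0]*x1 + ... + a[m-1]*xm = n,
    # assuming every coefficient is a positive integer.
    ways = [0] * (n + 1)
    ways[0] = 1
    for coeff in a:
        for total in range(coeff, n + 1):
            ways[total] += ways[total - coeff]
    return ways[n]

print(count_solutions([1, 2, 3], 10))  # 14 ways to write 10 with parts 1, 2, 3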
I came across a question where it was asked to find the number of unique ways of reaching from point (x1, y1) to point (x2, y2) in a 2D coordinate plane.
Note: it can be assumed without loss of generality that x1 < x2 and y1 < y2.
Moreover, the motions are constrained in the following manner: one can move only right or up, meaning a valid move goes from (xa, ya) to (xb, yb) with xa ≤ xb and ya ≤ yb.
Mathematically, this can be found by ((x2-x1) + (y2-y1))! / ((x2-x1)! × (y2-y1)!). I have thought of code too.
I have an approach coded with dynamic programming which takes around O([max(x2, y2)]^2) time and Θ(x2 × y2) space, where I can manage with just the upper or lower triangular part of the matrix.
Can you think of some other approaches where the running time is less than this? I am thinking of a recursive solution where the minimum running time is O(max(x2, y2)).
A simple efficient solution is the mathematical one.
Let x2-x1 = n and y2-y1 = m.
You need to take exactly n steps to the right and m steps up; all that is left to determine is their order.
This can be modeled as the number of binary vectors with n+m elements, of which exactly n are set to 1.
Thus, the total number of possibilities is choose(n+m, n) = (n+m)! / (n! × m!), which is exactly what you got.
Given that the mathematical answer is both proven and faster to calculate, I see no reason to use a different solution under these restrictions.
If you are eager to use recursion here, the recursive formula for binomial coefficient will probably be a good fit here.
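For reference, here is what the recursive (Pascal's rule) approach could look like with memoization; a sketch in Python, the function name is mine:

from functools import lru_cache

@lru_cache(maxsize=None)
def binom(n, k):
    # Pascal's rule: C(n, k) = C(n-1, k-1) + C(n-1, k)
    if k == 0 or k == n:
        return 1
    return binom(n - 1, k - 1) + binom(n - 1, k)

print(binom(3 + 2, 2))  # 10 monotone paths when x2-x1 = 3 and y2-y1 = 2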
EDIT:
You might be looking for the multiplicative formula to calculate it.
To compute the answer, you can use this formula:
(n+m)! / (n! × m!) = (n+1)/1 × (n+2)/2 × (n+3)/3 × … × (n+m)/m
So the pseudo code is:
let foo(n, m) =
    ans = 1;
    for i = 1 to m do
        ans = ans * (n+i) / i;
    done;
    ans
The order of the multiplications and divisions is important: if you change it, you can get an overflow even though the final result is not that large.
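A direct Python transcription of that pseudocode (the integer division is exact at every step, because the running value is itself a binomial coefficient):

def foo(n, m):
    # Computes (n+m)! / (n! * m!) multiplicatively.
    ans = 1
    for i in range(1, m + 1):
        ans = ans * (n + i) // i  # multiply first, then divide: stays an integer
    return ans

print(foo(3, 2))  # 10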
I finally managed to write the article to describe this question in detail and complete the answer as well. Here is the link for the same. http://techieme.in/dynamic-programming-distinct-paths-between-two-points/
try this formula:
ans = (x2-x1) * (y2-y1) + 1;
How do I find the first perfect square value of the function f(n) = A·n² + B·n + C? B and C are given. A, B, C and n are always integers, and A is always 1. The problem is finding n.
Example: A=1, B=2182, C=3248
The answer for the first perfect square is n=16, because sqrt(f(16))=196.
My algorithm increments n and tests whether the square root of f(n) is an integer.
This algorithm is very slow when B or C is large, because it takes on the order of n calculations to find the answer.
Is there a faster way to do this calculation? Is there a simple formula that can produce an answer?
What you are looking for are integer solutions to a special case of the general quadratic Diophantine equation [1]:
Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
where you have
ax^2 + bx + c = y^2
so that A = a, B = 0, C = -1, D = b, E = 0, F = c where a, b, c are known integers and you are looking for unknown x and y that satisfy this equation. Once you recognize this, solutions to this general problem are in abundance. Mathematica can do it (use Reduce[eqn && Element[x|y, Integers], x, y]) and you can even find one implementation here including source code and an explanation of the method of solution.
[1]: You might recognize this as a conic section. It is, and people have been studying conic sections for thousands of years. As such, our understanding of them is very deep and your problem is actually quite famous. The study of them is an immensely deep and still active area of mathematics.
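If you just want something faster than incrementing n one at a time, here is an elementary alternative (my own sketch, not the method of the linked solver): completing the square turns n^2 + Bn + C = y^2 into (2n + B)^2 - (2y)^2 = B^2 - 4C, so you can enumerate divisor pairs of D = B^2 - 4C in O(sqrt(D)) steps. This assumes D > 0, as in your example:

import math

def first_perfect_square_n(B, C):
    # Smallest positive n with n^2 + B*n + C a perfect square (A = 1).
    # (2n + B - 2y) * (2n + B + 2y) = B^2 - 4*C = D, so try all divisor pairs of D.
    D = B * B - 4 * C
    best = None
    for d1 in range(1, math.isqrt(D) + 1):
        if D % d1:
            continue
        d2 = D // d1
        if (d1 + d2) % 2:            # u = (d1 + d2) / 2 must be an integer
            continue
        u = (d1 + d2) // 2           # u = 2n + B
        if (u - B) % 2 == 0:
            n = (u - B) // 2
            if n > 0 and (best is None or n < best):
                best = n
    return best

print(first_perfect_square_n(2182, 3248))  # 16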