Series summation to calculate algorithm complexity

I have an algorithm, and I need to calculate its complexity. I'm close to the answer but I have a little math problem: what is the summation formula of the series
½(n^4 + n^3), where n runs through 1, 2, 4, 8, ..., so the series becomes:
½(1^4+1^3) + ½(2^4+2^3) + ½(4^4+4^3) + ½(8^4+8^3) + ...

It might help to express n as 2^k for k=0,1,2...
Substitute that into your original formula to get terms of the form (16^k + 8^k)/2.
You can break this up into two separate sums (one with base 16 and one with base 8),
each of which is a geometric series.
S1 = 1/2(16^0 + 16^1 + 16^2 + ...)
S2 = 1/2(8^0 + 8^1 + 8^2 + ...)
The J-th partial sum of a geometric series is a(1-r^J)/(1-r) where a is the initial
value and r the ratio between successive terms. For S1, a=1/2, r=16. For S2, a=1/2,
r=8.
Multiply it out and I believe you will find that the sum of the first J terms is O(16^J).

You're asking about
½ Ʃ ((2^r)^4 + (2^r)^3) from r=1 to n
(Sorry for the ugly math; there's no LaTeX here.)
The result (without the leading ½ factor) is (16/15)*16^n + (8/7)*8^n - 232/105.
See http://www.wolframalpha.com/input/?i=sum+%282%5Er%29%5E4%2B%282%5Er%29%5E3+from+r%3D1+to+n .
You don't need the exact formula. All you need to know is that this is an O(16^n) algorithm.
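A quick way to sanity-check that closed form (a Python 3 sketch of mine, not part of the answer); the sum compared here omits the leading ½, matching the Wolfram Alpha result, and the original series is half of it:
from fractions import Fraction

def direct_sum(n):
    # sum of (2^r)^4 + (2^r)^3 for r = 1..n (without the leading 1/2)
    return sum((2 ** r) ** 4 + (2 ** r) ** 3 for r in range(1, n + 1))

def closed_form(n):
    # (16/15)*16^n + (8/7)*8^n - 232/105
    return Fraction(16, 15) * 16 ** n + Fraction(8, 7) * 8 ** n - Fraction(232, 105)

for n in range(1, 8):
    assert closed_form(n) == direct_sum(n)
print("closed form verified; it grows as O(16^n)")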

Thanks to all of you. The final formula I was looking for (based on your work) was:
((1/15 * 2^(4*(log2(n)+1)) + 8^(log2(n)+1)/7 - 232/105)/2) + 1
This gives the same result as the program that runs the algorithm.

It looks like your series does not converge; that is, the summation is infinite. Maybe your formula is wrong, or you asked the question incorrectly.

Related

Understanding a Particular Recursive Algorithm

Which problem does the algorithm Delta below solve, where m, n >= 0 are integers?
So I'm finding the algorithm very hard to break down due to the nature of the nested recursion and how it calls another recursive algorithm. If I had to guess, I would say that Delta solves the LCS (longest common subsequence) problem, but I'm not able to give a good explanation as to why.
Could someone help me break down the algorithm and explain the recursion and how it works?
As you found out yourself, delta computes the product of two integers.
The recursion indeed makes this confusing to look at, but the best way to gain intuition is to perform the computation by hand on some example data. Looking at the functions separately, you will find that:
Gamma is just addition: Gamma(n, m) = Gamma(n, m - 1) + 1 (with Gamma(n, 0) = n) performs a naive summation, counting down the second argument while adding 1 each time. Example:
3 + 3 =
(3 + 2) + 1 =
((3 + 1) + 1) + 1 =
(((3 + 0) + 1) + 1) + 1 =
6
Knowing this, we can simplify Delta:
Delta(n, m) = n + Delta(n, m - 1) (if m!=0, else return 0).
In the same way, we are counting down on the second factor, but instead of adding 1 we add n. This is indeed one definition of multiplication. It is easy to understand if you manually work through an example, just like above.
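Based on the description above, a minimal Python sketch of the two functions might look like this (the exact base cases in the original assignment may differ):
def gamma(n, m):
    # Addition by repeated increment: gamma(n, m) = n + m
    if m == 0:
        return n
    return gamma(n, m - 1) + 1

def delta(n, m):
    # Multiplication by repeated addition: delta(n, m) = n * m
    if m == 0:
        return 0
    return gamma(n, delta(n, m - 1))

print(gamma(3, 3))  # 6
print(delta(4, 3))  # 12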

How to use FFT algorithm to calculate N-degree polynomial values in N given specific points

This is basically the task I was given at the Olympiad in Informatics in Poland, which is now over. The values should be computed modulo M (given). I know I somehow need to use the FFT algorithm to solve it in O(N log N) complexity.
N is a power of 2 (N <= 2^20) and (q^N mod M) = 1;
The values are powers from 1 to N of q, which is given. For example,
when q=5 and N=3, the output should contain: F(q^1 mod M), F(q^2 mod M), F(q^3 mod M).
a1, a2, ..., aN are given in the input (the coefficients of the polynomial).
Brute force would be N^2, and that's too slow. I think the radix-2 algorithm fits perfectly, but I don't know how it would give me the solution, since in the FFT you use complex numbers.
The algorithm you would use is pretty much the same as the FFT, but you use residues mod M instead of complex numbers. If you add the additional constraints that M is prime and all the q^i are distinct mod M, then you would have a number-theoretic transform:
https://www.nayuki.io/page/number-theoretic-transform-integer-dft
But you don't strictly need those extra constraints to solve your problem.
First, because that 1-based indexing is annoying, I'm going to refer to your a[N] as a[0] instead, and I'm going to move your Nth output to the start at index 0, because it makes the following discussion so much easier.
So you want:
out[0] = a[0] + a[1] + a[2] ... a[i] ... a[N-1]
out[1] = a[0] + a[1]*q + a[2]*q^2 ... a[i]*q^i ... a[N-1]*q^(N-1)
...
out[j] = ... + a[i]*q^(ij) ...
Notice that if you have the formula for any out[j], you can make the formula for out[j+1] by multiplying the coefficients a[...] by 1, q, q^2, ... So if we have a way to calculate the even-numbered outputs, we can apply it to those modified coefficients to calculate the odd-numbered outputs.
Now, for even-numbered outputs, all the powers of q are powers of q^2, and they repeat because q^N = q^0 mod M. So, for even numbered outputs, instead of calculating:
out[j] = a[0] + a[1]*q^j + ... + a[N-1]*q^(j(N-1)) ...
we can calculate it with half the coefficients like:
out[j] = (a[0]+a[N/2]) + ... + (a[i]+a[N/2+i])*(q^2)^(ij/2) ...
And that is just the solution to your problem using q^2 and N/2 instead of q and N.
So, just like the (decimation-in-frequency version of the) FFT, you solve your problem by transforming a[...] into two new sets of coefficients, each half the size, and then solve the smaller problem with q^2 and N/2 twice, using those coefficients to generate the even-numbered and odd-numbered outputs respectively.
I hope that helps... I know it's tough to follow, but if you already understand how the FFT works then you can probably see how to apply it to your problem now.
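To make the recursion concrete, here is a rough Python sketch of the folding scheme described above (my own reconstruction, not a reference solution); it assumes len(a) is a power of two and pow(q, len(a), M) == 1, with a[] and out[] indexed from 0 as in the discussion:
def modular_dft(a, q, M):
    # out[j] = sum_i a[i] * q^(i*j) mod M, for j = 0 .. len(a)-1
    n = len(a)
    if n == 1:
        return [a[0] % M]
    half = n // 2
    q_half = pow(q, half, M)
    # Even-numbered outputs: fold a[i] with a[i + n/2], recurse with root q^2.
    even_in = [(a[i] + a[i + half]) % M for i in range(half)]
    # Odd-numbered outputs: weight a[i] by q^i first, then fold the same way.
    odd_in, qi = [], 1
    for i in range(half):
        odd_in.append((a[i] + a[i + half] * q_half) * qi % M)
        qi = qi * q % M
    out = [0] * n
    out[0::2] = modular_dft(even_in, q * q % M, M)
    out[1::2] = modular_dft(odd_in, q * q % M, M)
    return out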

Sum-of-Product of subsets

Is there a name for this operation? And: is there a closed-form expression?
For a given set of n elements, and value k between 1 and n,
Take all subsets (combinations) of k items
Find the product of each subset
Find the sum of all those products
I can express this in Python, and do the calculation pretty easily:
from operator import mul
from itertools import combinations
from functools import reduce

def sum_of_product_of_subsets(list1, k):
    val = 0
    for subset in combinations(list1, k):
        val += reduce(mul, subset)
    return val
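For example (assuming Python 3), the function above gives:
sum_of_product_of_subsets([2, 3, 4], 2)   # 26 = 2*3 + 2*4 + 3*4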
I'm just looking for the closed form expression, so as to avoid the loop in case the set size gets big.
Note this is NOT the same as this question: Sum of the product over all combinations with one element from each group -- that question is about the sum-of-products of a Cartesian product. I'm looking for the sum-of-products of the set of combinations of size k; I don't think they are the same.
To be clear, for set(a, b, c, d), then:
k = 4 --> a*b*c*d
k = 3 --> b*c*d + a*c*d + a*b*d + a*b*c
k = 2 --> a*b + a*c + a*d + b*c + b*d + c*d
k = 1 --> a + b + c + d
Just looking for the expression; no need to supply the Python code specifically. (Any language would be illustrative, if you'd like to supply an example implementation.)
These are elementary symmetric polynomials. You can write them using summation signs as in Wikipedia. You can also use Vieta's formulas to get all of them at once as coefficients of a polynomial (up to signs)
(x - a_1)(x - a_2)...(x - a_k) =
    x^k
    - (a_1 + ... + a_k) x^(k-1)
    + (a_1 a_2 + a_1 a_3 + ... + a_(k-1) a_k) x^(k-2)
    - ...
    + (-1)^k a_1 a_2 ... a_k
By expanding (x-a_1)(x-a_2)...(x-a_k) you get a polynomial time algorithm to compute all those numbers (your original implementation runs in exponential time).
Edit: Python implementation:
from itertools import chain

l = [2, 3, 4]
x = [1]   # coefficients of the expanded polynomial, starting from the constant 1
for i in l:
    # multiply the current polynomial by (t + i): shift, scale by i, and add
    x = [a + b * i for a, b in zip(chain([0], x), chain(x, [0]))]
print(x)
That gives you [24, 26, 9, 1], as 2*3*4=24, 2*3+2*4+3*4=26, 2+3+4=9. That last 1 is the empty product, which corresponds to k=0 in your implementation.
This should be O(N^2). Using polynomial FFT you could do O(N log^2 N), but I am too lazy to code that.
I have just run into the same problem elsewhere and I might have an easier solution.
Basically the closed form you are looking for is this one:
(1+e_1)*(1+e_2)*(1+e_3)*...*(1+e_n) - 1
where S = {e_1, e_2, ..., e_n} is the given set. (Note that this sums the products over all subset sizes 1..n at once, not just a single k.)
Here is why:
Let 'm' be the product of the elements of S (m = e_1*e_2*...*e_n).
If you look at the original products of elements of subsets, you can see, that all of those products are divisors of 'm'.
Now apply the divisor function to 'm' (from now on called sigma(m)) with one modification: treat all the e_i as 'primes' (because we don't want them factored further), so sigma(e_i) = e_i + 1.
Then if you apply sigma to m:
sigma(m) = sigma(e_1*e_2*...*e_n) = 1 + [e_1+e_2+...+e_n] + [e_1*e_2 + e_1*e_3 + ... + e_(n-1)*e_n] + [e_1*e_2*e_3 + e_1*e_2*e_4 + ... + e_(n-2)*e_(n-1)*e_n] + ... + [e_1*e_2*...*e_n]
This is exactly the sum from the original problem (except for the 1 at the beginning).
Our divisor function is multiplicative, so the previous equation can be rewritten as follows:
sigma(m)=(1+e_1)*(1+e_2)*(1+e_3)*...*(1+e_n)
There is one correction you need here. It is because of the empty subset (which is counted here but not present in the original problem): it contributes the '1' at the beginning of the first equation.
So the closed form, what you need is:
(1+e_1)*(1+e_2)*(1+e_3)*...*(1+e_n) - 1
Sorry, I can't really code that, but I think the computation shouldn't take more than 2n-1 loops.
(You can read more about the divisor function here: http://en.wikipedia.org/wiki/Divisor_function)
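A minimal Python sketch of that closed form (my own; note it gives the sum over all subset sizes 1..n at once, not a single k):
from functools import reduce

def sum_of_products_of_all_subsets(elements):
    # (1 + e_1)*(1 + e_2)*...*(1 + e_n) - 1; the -1 removes the empty subset
    return reduce(lambda acc, e: acc * (1 + e), elements, 1) - 1

print(sum_of_products_of_all_subsets([2, 3, 4]))  # 59 = 9 + 26 + 24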

Running time: Finding k smallest element using Selection Sort

I suppose the answer is O(kn)? But when I tried drawing out the sum in more detail, it looked different.
So I must have done something wrong in the more detailed analysis?
First, your work list has length k+2 when it should probably have length k. My guess is that you meant to run from n to n-(k-1) = n-k+1.
Now if you want to sum consecutive numbers, the easiest is to remember (or derive) the formula
1 + 2 + ... + a = a(a+1)/2
Use this to figure out that the sum you're after is
n(n+1)/2 - (n-k)(n-k+1)/2 = nk + (k-k^2)/2
as you correctly found. Now, think about big O. Since n > k, we know nk > k^2, so the latter term is a lower-order term, and the whole thing is O(nk).
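A rough Python sketch of the algorithm being analyzed (my reconstruction, since the original work list isn't shown): run only the first k passes of selection sort, so pass i scans about n - i elements.
def k_smallest_by_selection(arr, k):
    # First k selection-sort passes; total comparisons are roughly
    # n + (n-1) + ... + (n-k+1) = nk + (k - k^2)/2, i.e. O(nk).
    a = list(arr)
    n = len(a)
    for i in range(min(k, n)):
        m = i
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    return a[:k]

print(k_smallest_by_selection([5, 1, 4, 2, 8, 3], 3))  # [1, 2, 3]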

Interview question - Finding numbers

I just got this question on a SE position interview, and I'm not quite sure how to answer it, other than brute force:
Given a natural number N, find two numbers, A and P, such that:
N = A + (A+1) + (A+2) + ... + (A+P-1)
P should be the maximum possible.
Ex: For N=14, A = 2 and P = 4
N = 2 + (2+1) + (2+2) + (2+4-1)
N = 2 + 3 + 4 + 5
Any ideas?
If N is even/odd, we need an even/odd number of odd numbers in the sum. This already halves the number of possible solutions. E.g. for N=14, there is no point in checking any combinations where P is odd.
Rewriting the formula given, we get:
N = A + (A+1) + (A+2) + ... + (A+P-1)
= P*A + 1 + 2 + ... + (P-1)
= P*A + (P-1)P/2 *
= P*(A + (P-1)/2)
= P/2*(2*A + P-1)
The last line means that N must be divisible by P/2; this also rules out a number of possibilities. E.g. 14 only has these divisors: 1, 2, 7, 14, so the possible values for P would be 2, 4, 14 and 28. 14 and 28 are ruled out for obvious reasons (in fact, any P for which P*(P-1)/2 already exceeds N can be ignored).
This should be a lot faster than the brute-force approach.
(* The sum of the first n natural numbers is n(n+1)/2)
With interview questions, it is often wise to think about what is probably the purpose of the question. If I would be asking you this question, it is not because I think you know the solution, but I want to see you finding the solution. Reformulating the problem, making implications, devising what is known, ... this is what I would like to see.
If you just sit and tell me "I do not know how to solve it", you immediately fail the interview.
If you say: I know how to solve it by brute force, and I am aware it will probably be slow, I will give you some hints or help to get you started. If that does not help, you most likely fail (unless you show some extraordinary skills to compensate for the fact that you are probably lacking something in the field of general problem analysis, e.g. you show how to implement a solution parallelized across many cores or on a GPU).
If you bring me a ready solution but you are unable to derive it, I will give you another, similar problem, because I am not interested in the solution, I am interested in your thinking.
A + (A+1) + (A+2) + ... + (A+P-1) simplifies to P*A + P*(P-1)/2, or equivalently P*(A + (P-1)/2).
Since 2N = P*(2A + P - 1), P must divide 2N. Thus, you can enumerate all divisors of 2N and test each candidate P as follows:
Is A = (N - P*(P-1)/2)/P (the first simplification solved for A) a positive integer? (I assume A should be a positive integer; otherwise the problem would be trivial.) If so, return it as a solution.
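A short Python sketch of that divisor test (variable names are mine); it tries the largest divisor first, so the first hit maximizes P:
def find_max_p(n):
    # Test each divisor P of 2N, largest first; accept P when
    # A = (N - P*(P-1)/2) / P is a positive integer.
    for p in sorted((d for d in range(1, 2 * n + 1) if (2 * n) % d == 0), reverse=True):
        numerator = n - p * (p - 1) // 2
        if numerator > 0 and numerator % p == 0:
            return numerator // p, p   # (A, P) with P as large as possible

print(find_max_p(14))  # (2, 4): 14 = 2 + 3 + 4 + 5
Enumerating the divisors naively costs O(N) here; trial division up to sqrt(2N) would cut the candidate search down further.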
This can be solved using a 0-1 knapsack approach.
Observation: N/2 + (N/2 + 1) > N,
so the candidate terms are 1, 2, ..., N/2.
With capacity W = N and value v_i = 1 for all elements, I think this maps to 0-1 knapsack in O(N^2).
Here is an O(n) solution.
It uses the formula for the sum of an arithmetic progression:
S = number_of_terms * (first_term + last_term) / 2
Here the sum S is N, the number of terms is P, and the first term is A.
Manipulating the above equation, we can solve for A and then iterate over candidate values of P, checking each for a valid A.
def solve(n, p):
    # Solve N = P*A + P*(P-1)/2 for A (integer division; the check below
    # rejects P when the division is not exact)
    return (2 * n - p ** 2 + p) // (2 * p)

def condition(n, p, a):
    # A must be a positive integer satisfying the original equation
    return 2 * n == 2 * a * p + p ** 2 - p and a > 0

def find(n):
    # Try the largest P first, so the first valid (A, P) has P maximal
    for p in range(n, 0, -1):
        a = solve(n, p)
        if condition(n, p, a):
            return n, p, a
