What is the recurrence equation for this multiplication algorithm?

The multiplication algorithm is for multiplying two radix r numbers:
0 <= x,y < r^n
x = x1 * r^(n/2) + x0
y = y1 * r^(n/2) + y0
where x0 is the half of x that contains the least significant digits, and x1 is the half with the most significant digits, and similarly for y.
So if r = 10 and n = 4, we have that x = 9723 = 97 * 10^2 + 23, where x1 = 97 and x0 = 23.
The multiplication can be done as:
z = x*y = x1*y1*r^n + (x0*y1 + x1*y0)*r^(n/2) + x0*y0
So we now have four multiplications of half-sized numbers (we started with one multiplication of n-digit numbers and now have four multiplications of n/2-digit numbers).
As I see it the recurrence for this algorithm is:
T(n) = O(1) + 4*T(n/2)
But apparently it is T(n) = O(n) + 3T(n/2)
Either way, the solution is T(n) = O(n^2), and I can see this, but I am wondering why there is an O(n) term instead of an O(1) term?

You are right that with the naive approach the complexity is quadratic: if you compute the term x0*y1 + x1*y0 directly, with two products, you do four products altogether, and the recurrence is T(n) = O(n) + 4T(n/2), which solves to O(n^2).
However, Karatsuba observed that x*y = z2*r^n + z1*r^(n/2) + z0, where we let z2 = x1*y1, z0 = x0*y0, and z1 = x0*y1 + x1*y0, and that one can express the last term as z1 = (x1+x0)*(y1+y0) - z2 - z0, which involves only one product. Using this trick, the recurrence becomes T(n) = O(n) + 3T(n/2), because we do three products altogether (as opposed to four if we don't use the trick).
Because the numbers are of order r^n, we need n digits to represent them (in general, for a fixed r >= 2, we need O(log N) digits to represent the number N). To add two numbers of that order, you have to "touch" all the digits. Since there are n digits, you need O(n) time to compute their sum (formally I'd say Omega(n), meaning "at least order of n time", but let's leave the details aside). This is where the O(n) term comes from: the additions and subtractions done at each level of the recursion cost O(n), not O(1).
For example, when computing the product N*M, the number of digits n will be max(log N, log M) (assuming the base r >= 2 is constant).
The algebraic trick is explained in more detail on the Wikipedia page for the Karatsuba algorithm.
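As an illustration, here is a minimal Python sketch of the base-10 case (r = 10); the function name and the splitting via divmod are my own choices, not from the original question:

from math import inf  # no external deps needed; shown for clarity that only stdlib is used

def karatsuba(x, y):
    # Base case: a single-digit factor is multiplied directly.
    if x < 10 or y < 10:
        return x * y
    # Split both numbers at half the digit count of the larger one.
    half = max(len(str(x)), len(str(y))) // 2
    r_half = 10 ** half
    x1, x0 = divmod(x, r_half)   # x = x1 * r^(n/2) + x0
    y1, y0 = divmod(y, r_half)   # y = y1 * r^(n/2) + y0
    z2 = karatsuba(x1, y1)
    z0 = karatsuba(x0, y0)
    # The one-product trick: z1 = (x1+x0)(y1+y0) - z2 - z0
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0
    return z2 * r_half ** 2 + z1 * r_half + z0

assert karatsuba(9723, 1234) == 9723 * 1234

Note that the three recursive calls plus the O(n)-time additions and shifts are exactly the T(n) = O(n) + 3T(n/2) recurrence discussed above.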

Related

merging two sorted lists, lower bound

Using a decision tree and your answer to part (a), show that any algorithm that correctly merges two sorted lists must perform at least 2n − o(n) comparisons.
answer from part (a): there are (2n choose n) ways to divide 2n numbers into two sorted lists, each with n numbers
(2n choose n) <= 2^h
h >= lg((2n)! / (n!)^2)
  = lg((2n)!) - 2*lg(n!)
  = Θ(2n*lg(2n)) - 2*Θ(n*lg(n)) <----
  = Θ(2n) <----
I don't understand the last step. How can it be Θ(2n)?
You can write the logarithm of a product as the sum of the logarithms of the factors (the product rule for logarithms):
2*n*lg(2*n) = 2*n*(lg(2) + lg(n)) = 2*n*(1 + lg(n))
So, treating the Θ(·) expressions as their leading terms:
2*n*(1 + lg(n)) - 2*n*lg(n) =
2*n + 2*n*lg(n) - 2*n*lg(n) = 2*n
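As a quick numeric sanity check (my own illustration, not part of the original answer), one can compute lg(2n choose n) directly and watch it approach 2n:

from math import comb, log2

# lg(2n choose n) = 2n - O(lg n), so the height bound approaches 2n.
for n in (10, 100, 1000):
    h = log2(comb(2 * n, n))
    print(n, round(h, 1), 2 * n)   # e.g. n=1000 gives h ~ 1994.2 vs 2n = 2000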

Time complexity T(n) for an algorithm that depends on 2 inputs

Normally when analyzing the running time of an algorithm I am dealing with a single input that affects the running time. I'm trying to understand how to represent T(n) when two or more inputs affect the running time.
For example, linear search in the worst case:
function LinearSearch(arr, N, x)
    for (i = 0; i < N; i++)    ---> C1*N + C2
        if arr[i] = x          ---> C3*N
            return true
    return false               ---> C4
T(n) = (C1 + C3)*N + (C2 + C4)
= CN + C
so T(n) is linear with respect to N.
Now say there was another algorithm that took in inputs X and Y, and I did a similar analysis and found the cost in the worst case to be:
T(n) = CX + C
and the best case to be:
T(n) = CY + C
My question is: is it correct to represent the running time like this, given that two different inputs affect the running time in different cases?
I've not managed to find much information online or in textbooks, but I've been wondering whether the n in T(n) represents all the inputs, or whether it could be written like so:
T(X) = CX + C
T(Y) = CY + C
I've also seen an algorithm in a research paper described similarly to:
T(n, m) = some expression
Any help would be greatly appreciated.
Thanks
EDIT: An example of an algorithm whose time complexity depends on two inputs is radix sort.
I understand that radix sort is often given as O(n*k), where n is the number of elements to be sorted and k is the number of digits of the max value.
Ignoring the exact details of T(n), how might this be represented?
If the complexity of the algorithm depends on a single parameter and you want to call that parameter X, then the time complexity is a function of X rather than of n (what would n even be?): e.g. T(X) = X^2.
If the complexity of the algorithm depends on parameters n1, n2, ..., nk (and the parameters are mutually independent), then the time complexity will be a function of k parameters, T(n1, ..., nk).
For example, an algorithm that takes two strings of lengths x and y and prints them would have time complexity T(x,y) = O(x + y).
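For the radix-sort example mentioned in the edit, a short sketch (my own, with illustrative names) makes the two parameters visible: the outer loop runs k times and each pass does O(n) work, so T(n, k) = O(n*k).

def radix_sort(a, base=10):
    if not a:
        return a
    k = len(str(max(a)))                     # k = digits of the max value
    for d in range(k):                       # k passes ...
        buckets = [[] for _ in range(base)]
        for v in a:                          # ... each doing O(n) work
            buckets[(v // base ** d) % base].append(v)
        a = [v for b in buckets for v in b]  # stable reassembly
    return a

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]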

Does the asymptotic complexity of a multiplication algorithm only rely on the larger of the two operands?

I'm taking an algorithms class and I repeatedly have trouble when I'm asked to analyze the runtime of code when there is a line with multiplication or division. How can I find big-theta of multiplying an n digit number with an m digit number (where n>m)? Is it the same as multiplying two n digit numbers?
For example, right now I'm attempting to analyze the following line of code:
return n*count/100
where count is at most 100. Is the asymptotic complexity of this any different from n*n/100? or n*n/n?
You can always look this up: Computational complexity of mathematical operations (Wikipedia).
In your case the complexity of n*count/100 is O(length(n)), since 100 is a constant and length(count) is at most 3.
In general, multiplication of two numbers of n and m digits takes O(nm), and the same holds for division; here I assume we are talking about long division. There are many sophisticated algorithms that beat this complexity.
To make things clearer I will provide an example. Suppose you have three numbers:
A - n digits length
B - m digits length
C - p digits length
Find complexity of the following formula:
A * B / C
Multiply first. The complexity of A * B is O(nm), and the result is a number D of n+m digits. Now consider D / C: here the complexity is O((n+m)p), so the overall complexity is the sum of the two, O(nm + (n+m)p) = O(m(n+p) + np).
Divide first. We divide B / C with complexity O(mp), obtaining a number E of at most m digits. Then we calculate A * E with complexity O(nm). The overall complexity is O(mp + nm) = O(m(n+p)).
From this analysis you can see that it is beneficial to divide first. Of course, in a real-life situation you would account for numerical stability as well.
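To make the comparison concrete, here is a small sketch (my own cost model, treating digit products as the unit of work) that plugs sample digit counts into the two formulas:

# Cost model: multiplying a-digit by b-digit numbers costs a*b units,
# and long division of an a-digit number by a b-digit number costs a*b.
def multiply_first(n, m, p):
    return n * m + (n + m) * p   # A*B first, then the (n+m)-digit D / C

def divide_first(n, m, p):
    return m * p + n * m         # B/C first, then A times the ~m-digit E

n, m, p = 50, 40, 30
print(multiply_first(n, m, p))   # 4700
print(divide_first(n, m, p))     # 3200 -- dividing first is cheaper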
From Modern Computer Arithmetic:
Assume the larger operand has size m, and the smaller has size n ≤ m, and denote by M(m,n) the corresponding multiplication cost.
When m is an exact multiple of n, say m = kn, a trivial strategy is to cut the larger operand into k pieces, giving M(kn,n) = kM(n) + O(kn).
Suppose m ≥ n and n is large. To use an evaluation-interpolation scheme, we need to evaluate the product at m + n points, whereas balanced k by k multiplication needs 2k points. Taking k ≈ (m+n)/2, we see that M(m,n) ≤ M((m+n)/2)(1 + o(1)) as n → ∞. On the other hand, from the discussion above, we have M(m,n) ≤ ⌈m/n⌉M(n)(1 + o(1)).

Complexity of trominoes algorithm

What is (or what should be) the complexity of the divide-and-conquer trominoes algorithm, and why?
I've been given a 2^k * 2^k sized board with one of the tiles randomly removed, making it a deficient board. The task is to fill the board with "trominoes", L-shaped figures made of 3 tiles.
Tiling Problem
– Input: An n by n square board, with one of the 1 by 1 squares missing, where n = 2^k for some k ≥ 1.
– Output: A tiling of the board using trominoes, where a tromino is a three-square tile obtained by deleting the upper right 1 by 1 corner from a 2 by 2 square.
– You are allowed to rotate the tromino when tiling the board.
Base Case: A 2 by 2 square can be tiled.
Induction:
– Divide the square into 4, n/2 by n/2 squares.
– Place the tromino at the “center”, so that the tromino does not overlap the n/2 by n/2 square that already contains the missing 1 by 1 square.
– Solve each of the four n/2 by n/2 boards inductively.
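Here is a minimal Python sketch of that induction (my own implementation; the names are illustrative). Each call labels one central tromino and recurses into the four quadrants, treating the covered central cells as the quadrants' holes:

from itertools import count

def tile(board, top, left, size, hole_r, hole_c, labels=None):
    # Tile a size x size sub-board whose single hole is at (hole_r, hole_c).
    if labels is None:
        labels = count(1)
    if size == 1:
        return
    t = next(labels)                    # label for the tromino placed here
    half = size // 2
    mid_r, mid_c = top + half, left + half
    quadrants = [(top, left), (top, mid_c), (mid_r, left), (mid_r, mid_c)]
    # The cell of each quadrant that touches the board's center.
    centers = [(mid_r - 1, mid_c - 1), (mid_r - 1, mid_c),
               (mid_r, mid_c - 1), (mid_r, mid_c)]
    for (qr, qc), (cr, cc) in zip(quadrants, centers):
        if qr <= hole_r < qr + half and qc <= hole_c < qc + half:
            tile(board, qr, qc, half, hole_r, hole_c, labels)  # quadrant with the hole
        else:
            board[cr][cc] = t           # one arm of the central tromino
            tile(board, qr, qc, half, cr, cc, labels)

n = 4                                   # n = 2^k
board = [[0] * n for _ in range(n)]
board[0][0] = -1                        # the missing square
tile(board, 0, 0, n, 0, 0)
for row in board:
    print(row)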
This algorithm runs in time O(n^2) = O(4^k). To see why, notice that the algorithm does O(1) work per grid, then makes four subcalls to grids whose width and height are half the original size. If we use n as a parameter denoting the width or height of the grid, we have the following recurrence relation:
T(n) = 4T(n / 2) + O(1)
By the Master Theorem, this solves to O(n^2). Since n = 2^k, we see that n^2 = 4^k, so this is also O(4^k) if you want to use k as your parameter.
We could also let N denote the total number of squares on the board (so N = n^2), in which case the subcalls are to four grids of size N / 4 each. This gives the recurrence
S(N) = 4S(N / 4) + O(1)
This solves to O(N) = O(n^2), confirming the above result.
Hope this helps!
To my understanding, the complexity can be determined as follows. Let T(n) denote the number of steps needed to solve a board of side length n. From the description in the original question above, we have
T(2) = c
where c is a constant and
T(n) = 4*T(n/2) + b
where b is a constant for placing the tromino. Using the master theorem, the runtime bound is
O(n^2)
via case 1.
I'll try to offer a less formal solution, without making use of the Master theorem.
– Place the tromino at the “center”, so that the tromino does not overlap the n/2 by n/2 square that already contains the missing 1 by 1 square.
I'm guessing this is an O(1) operation? In that case, if n is the board size:
T(1) = O(1)
T(n) = 4T(n / 4) + O(1) =
= 4(4T(n / 4^2) + O(1)) + O(1) =
= 4^2T(n / 4^2) + 4*O(1) + O(1) =
= ... =
= 4^k*T(n / 4^k) + (4^(k-1) + ... + 4 + 1)*O(1) = 4^k*T(n / 4^k) + O(4^k)
But n = 2^k x 2^k = 2^(2k) = (2^2)^k = 4^k, so the whole algorithm is O(n).
Note that this does not contradict @Codor's answer, because he took n to be the side length of the board, while I took it to be the entire area.
If the middle step is not O(1) but O(n):
T(n) = 4T(n / 4) + O(n) =
= 4(4*T(n / 4^2) + O(n / 4)) + O(n) =
= 4^2T(n / 4^2) + 2*O(n) =
= ... =
= 4^kT(n / 4^k) + k*O(n)
We have:
k*O(n) = O(n*log n), because 4^k = n implies k = log_4(n)
So the entire algorithm would be O(n log n).
You do O(1) work per tromino placed. Since there are (n^2 - 1)/3 trominoes to place, the algorithm takes O(n^2) time.

1/x + 1/y = 1/N(factorial)

The question is how to solve 1/x + 1/y = 1/N! (N factorial): find the number of pairs (x, y) that satisfy the equation for large values of N.
I've solved the problem for relatively small values of N (any N! that will fit into a long). So I know I can solve the problem by getting all the divisors of (N!)^2. But that starts failing when (N!)^2 no longer fits into a long. I also know I can find the prime factorization of N! by adding up the prime factors of each number in the product 1 * 2 * ... * N. What I am missing is how I can use all the numbers in the factorial to find the x and y values.
EDIT: Not looking for the "answer" just a hint or two.
Problem : To find the count of factors of (N!)^2.
Hints :
1) You don't really need to compute (N!)^2 to find its prime factors.
Why?
Say you find the prime factorization of N! as (p1^k1) x (p2^k2) .... (pi^ki)
where pj's are primes and kj's are exponents.
Now the number of factors of N! is simply
(k1 + 1) x (k2 + 1) x ... x (ki + 1).
2) For (N!)^2, the above expression would be,
(2*k1 + 1) * (2*k2 + 1) * ... * (2*ki + 1)
which is essentially what we are looking for.
For example, lets take N=4, N! = 24 and (N!)^2 = 576;
24 = 2^3 * 3^1;
Hence no of factors = (3+1) * (1+1) = 8, viz {1,2,3,4,6,8,12,24}
For 576 = 2^6 * 3^2, it is (2*3 + 1) * (2*1 + 1) = 21;
3) Basically you need to find the multiplicity of each prime <= N here.
Please correct me if I'm wrong anywhere up to here.
Here is your hint. Suppose that m = p1^k1 * p2^k2 * ... * pj^kj. Every factor of m will have from 0 to k1 factors of p1, 0 to k2 factors of p2, and so on. Thus there are (1 + k1) * (1 + k2) * ... * (1 + kj) possible divisors.
So you need to figure out the prime factorization of (n!)^2.
Note that this counts, for instance, 1⁄6 = 1⁄8 + 1⁄24 as a different pair from 1⁄6 = 1⁄24 + 1⁄8. If order does not matter, add 1 and divide by 2. (The divide by 2 is because two divisors typically lead to the same answer; the add 1 covers the exception, the divisor n! itself, which gives a pair that pairs with itself.)
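A brute-force check of this count (my own sketch, using m = 24 = 4! from the earlier example): every divisor d of m^2 gives the ordered solution x = m + d, y = m + m^2/d, so the ordered-pair count equals the number of divisors of m^2.

m = 24                                     # 4!
msq = m * m                                # 576 = 2^6 * 3^2
divisors = [d for d in range(1, msq + 1) if msq % d == 0]
print(len(divisors))                       # 21 ordered pairs (x, y)
for d in divisors:
    x, y = m + d, m + msq // d
    assert x * y == m * (x + y)            # equivalent to 1/x + 1/y = 1/m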
This is more math than programming.
Your equation implies xy = n!(x+y).
Let c = gcd(x, y), so x = c*x', y = c*y', and gcd(x', y') = 1.
Then c^2*x'*y' = n!*c*(x'+y'), so c*x'*y' = n!*(x'+y').
Now, since x' and y' are coprime, x'*y' is also coprime to x'+y', so x'+y' must divide c.
So c = a*(x'+y'), which gives a*x'*y' = n!.
To solve your problem, you should find all pairs of coprime divisors x', y' of n!; every such pair gives a solution (x, y) = (n!*(x'+y')/y', n!*(x'+y')/x').
Let F(N) be the number of (x,y) combinations that satisfy your requirements.
F(N+1) = F(N) + #(x,y) that satisfy the condition for N+1 with at least one of x, y not divisible by N+1.
The intuition here is that for every combination (x,y) that works for N, (x*(N+1), y*(N+1)) works for N+1; and if (x,y) is a solution for N+1 with both divisible by N+1, then (x/(N+1), y/(N+1)) is a solution for N.
Now, I am not sure how difficult it is to find the number of (x,y) that work for N+1 with at least one of them not divisible by N+1, but it should be easier than solving the original problem.
The multiplicity (exponent) of a prime p in N! can be found by the formula below (Legendre's formula):
Exponent of p in N! = [N/p] + [N/p^2] + [N/p^3] + [N/p^4] + ...
where [x] is the floor function, e.g. [1.23] = 1.
E.g. the exponent of 3 in 24! is [24/3] + [24/9] + [24/27] + ... = 8 + 2 + 0 + 0 + ... = 10.
The whole problem now reduces to identifying the primes up to N and finding their exponents in N!.
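Putting the hints together, here is a sketch (my own; the names are illustrative) that counts the solutions without ever computing N!, sieving the primes up to N and applying the exponent formula above:

def solution_count(n):
    # Number of ordered pairs (x, y) with 1/x + 1/y = 1/n!,
    # i.e. the number of divisors of (n!)^2.
    is_prime = [True] * (n + 1)
    total = 1
    for p in range(2, n + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
            e, pk = 0, p
            while pk <= n:                # exponent of p in n! (Legendre)
                e += n // pk
                pk *= p
            total *= 2 * e + 1            # (2*e + 1) divisor choices in (n!)^2
    return total

print(solution_count(4))                  # 21, matching the example above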
