Mathematics behind Babylonian Square Root method [closed] - algorithm

I read about a method to calculate the square root of a number, and the algorithm is as follows:
double findSquareRoot(int n) {
    double x = n;
    double y = 1;
    double e = 0.00001;
    while (x - y >= e) {
        x = (x + y) / 2;
        y = n / x;
    }
    return x;
}
My questions regarding this method are:
How does it calculate the square root? I don't understand the mathematics behind it. How do x = (x+y)/2 and y = n/x converge to the square root of n? Please explain the mathematics.
What is the complexity of this algorithm?

It is easy to see how it works if you do some runs and print the successive values of x and y. For example, for 100:
50.5 1.9801980198019802
26.24009900990099 3.8109612300726345
15.025530119986813 6.655339226067038
10.840434673026925 9.224722348894286
10.032578510960604 9.96752728032478
10.000052895642693 9.999947104637101
10.000000000139897 9.999999999860103
See, the trick is that if x is not the square root of n, then it is above or below the real root, and n/x is always on the other side. So if you calculate the midpoint of x and n/x it will be somewhat nearer to the real root.
And about the complexity, it is actually unbounded, because the real root will never be reached. That's why you have the e parameter.
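For reference, here is a minimal Python sketch (not the asker's Java) that prints the successive values of x and y; running it for n = 100 reproduces the trace above:
def trace_sqrt(n, e=0.00001):
    x, y = float(n), 1.0
    while x - y >= e:
        x = (x + y) / 2    # midpoint of the two approximations
        y = n / x          # always lands on the other side of the true root
        print(x, y)
    return x

trace_sqrt(100)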

This is a typical application of Newton's method for calculating the square root of n. You're calculating the limit of the sequence:
x_0 = n
x_{i+1} = (x_i + n / x_i) / 2
Your variable x is the current term x_i and your variable y is n / x_i.
To understand why you have to calculate this limit, you need to think of the function:
f(x) = x^2 - n
You want to find the root of this function. Its derivative is
f'(x) = 2 * x
and Newton's method gives you the formula:
x_{i+1} = x_i - f(x_i) / f'(x_i) = x_i - (x_i^2 - n) / (2 * x_i) = (x_i + n / x_i) / 2
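To make the connection explicit, here is a small illustrative sketch (the function name is mine) of the general Newton iteration specialized to f(x) = x^2 - n; each step is algebraically the same update (x_i + n/x_i)/2:
def newton_sqrt(n, tol=1e-10, max_iter=50):
    f = lambda x: x * x - n        # the function whose positive root is sqrt(n)
    df = lambda x: 2 * x           # its derivative
    x = float(n)                   # x_0 = n, as in the question
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)   # equals (x + n/x) / 2
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x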
For completeness, I'm copying here the rationale from #rodrigo's answer, combined with my comment to it. This is helpful if you want to forget about Newton's method and try to understand this algorithm alone.
The trick is that if x is not the square root of n, then it is
an approximation which lies either above or below the real root, and y = n/x is always on the
other side. So if you calculate the midpoint (x+y)/2, it will be
nearer to the real root than the worst of these two approximations
(x or y). When x and y are close enough, you're done.
This will also help you find the complexity of the algorithm. Say that d is the distance of the worse of the two approximations from the real root r. Then the distance between the midpoint (x+y)/2 and r is at most d/2 (it helps to draw the points on a line to visualize this). This means that, with each iteration, the distance is halved. Therefore, the worst-case complexity is logarithmic with respect to the distance of the initial approximation and the precision that is sought. For the given program, it is
log(|n-sqrt(n)|/epsilon)

I think all the information can be found on Wikipedia.
The basic idea is that if x is an overestimate of the square root of a non-negative real number S, then S/x will be an underestimate, and so the average of these two numbers may reasonably be expected to provide a better approximation.
With each iteration this algorithm doubles the number of correct digits in the answer, so the complexity is linear in the logarithm of the desired accuracy.
Why does it work? As stated here, if you iterate infinitely you will converge to some value; let's name it L. L has to satisfy the equation L = (L + N/L)/2 (as in the algorithm), so L = sqrt(N). If you're worried about convergence, you may calculate the squared relative error for each iteration (E_k is the error, A_k is the computed value):
E_k = (A_k / sqrt(N) - 1)^2
If
A_k = (A_{k-1} + N/A_{k-1}) / 2 and A_k = sqrt(N) (sqrt(E_k) + 1)
you may derive a recurrence relation for E_k:
E_k = E_{k-1}^2 / [4 (sqrt(E_{k-1}) + 1)^2]
and its limit is 0, so the limit of the sequence A_1, A_2, ... is sqrt(N).
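An illustrative check of that recurrence (Python; the helper name is mine): printing the squared relative error E_k at each step shows it shrinking roughly quadratically, i.e. the number of correct digits roughly doubles per iteration.
import math

def error_trace(N, steps=6):
    A = float(N)                           # A_0 = N
    for k in range(1, steps + 1):
        A = (A + N / A) / 2                # A_k
        E = (A / math.sqrt(N) - 1) ** 2    # squared relative error E_k
        print(k, A, E)

error_trace(100)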

The mathematical explanation is that, over a small range, the arithmetic mean is a reasonable approximation to the geometric mean, which is what is used to calculate the square root. As the iterations get closer to the true square root, the difference between the arithmetic mean and the geometric mean vanishes, and the approximation gets very close. Here is my favorite version of Heron's algorithm, which first normalizes the input n into the range 1 ≤ n < 4, then unrolls the loop for a fixed number of iterations that is guaranteed to converge.
def root(n):
    if n < 1: return root(n * 4) / 2
    if 4 <= n: return root(n / 4) * 2
    x = (n + 1) / 2
    x = (x + n / x) / 2
    x = (x + n / x) / 2
    x = (x + n / x) / 2
    x = (x + n / x) / 2
    x = (x + n / x) / 2
    return x
I discuss several programs to calculate the square root at my blog.
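Not part of the original answer, but a quick check of the normalized version above against math.sqrt:
from math import sqrt

for n in (2, 10, 100, 0.5):
    print(n, root(n), sqrt(n))   # the fixed number of unrolled iterations is enough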

Related

Question about properties of modulo inverse

When looking at Java's BigInteger implementation of exactDivideBy3, 0xAAAAAAAB is the modular inverse of 3 (mod 2^32):
q = (w * 0xAAAAAAABL) & LONG_MASK;
result = (int) q;
Let's say w is some number that can be exactly divided by 3 (that is, the remainder is known to be zero); then q is that result. So there seems to be some number theory involved with the modular inverse:
1/3 (mod 2^32) = 0xAAAAAAAB
Multiplying both sides by w gives
w/3 (mod 2^32) = w * 0xAAAAAAAB
which then gives the result. I'm confused about this property. All I can find is (a * a^-1) = 1 (mod n), or examples of how to find a modular inverse, but nothing about this multiplication property. Where can I find more information?
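No answer is shown here, but the property is easy to check numerically. A minimal Python sketch (the mask plays the role of Java's LONG_MASK; the constant is from the question): since w = 3k implies w * 3^-1 ≡ k (mod 2^32) and 0 <= k < 2^32, multiplying an exact multiple of 3 by the inverse of 3 recovers the exact quotient.
MASK = 0xFFFFFFFF                        # 32-bit mask, standing in for LONG_MASK

assert (3 * 0xAAAAAAAB) & MASK == 1      # 0xAAAAAAAB really is 3^-1 mod 2^32

for w in (3, 21, 300, 999999):
    q = (w * 0xAAAAAAAB) & MASK
    print(w, q, w // 3)                  # q equals w // 3 whenever 3 divides w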

How do you determine the average-case complexity of this algorithm?

It's usually easy to calculate the time complexity for the best case and the worst case, but when it comes to the average case, especially when a probability p is given, I don't know where to start.
Let's look at the following algorithm to compute the product of all the elements in a matrix:
int computeProduct(int[][] A, int m, int n) {
    int product = 1;
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) {
            if (A[i][j] == 0) return 0;
            product = product * A[i][j];
        }
    }
    return product;
}
Suppose p is the probability of A[i][j] being 0 (i.e. the algorithm terminates there and returns 0); how do we derive the average-case time complexity of this algorithm?
Let’s consider a related problem. Imagine you have a coin that flips heads with probability p. How many times, on expectation, do you need to flip the coin before it comes up heads? The answer is 1/p, since
There’s a p chance that you need one flip.
There’s a p(1-p) chance that you need two flips (the first flip has to go tails and the second has to go heads).
There’s a p(1-p)^2 chance that you need three flips (the first two flips need to go tails and the third has to go heads)
...
There’s a p(1-p)^(k-1) chance that you need k flips (the first k-1 flips need to go tails and the kth needs to go heads.)
So this means the expected value of the number of flips is
p + 2p(1 - p) + 3p(1 - p)^2 + 4p(1 - p)^3 + ...
= p(1(1 - p)^0 + 2(1 - p)^1 + 3(1 - p)^2 + ...)
So now we need to work out what this summation is. The general form is
p sum from k = 1 to infinity (k(1 - p)^(k-1)).
Rather than solving this particular summation, let's make this more general. Let x be some variable that, later, we'll set equal to 1 - p, but which for now we'll treat as a free value. Then we can rewrite the above summation as
p sum from k = 1 to infinity (kx^(k-1)).
Now for a cute trick: notice that the inside of this expression is the derivative of x^k with respect to x. Therefore, this sum is
p sum from k = 1 to infinity (d/dx x^k).
The derivative is a linear operator, so we can move it out to the front:
p d/dx sum from k = 1 to infinity (x^k)
That inner sum (x + x^2 + x^3 + ...) is the Taylor series for 1 / (1 - x) - 1, so we can simplify this to get
p d/dx (1 / (1 - x) - 1)
= p / (1 - x)^2
And since we picked x = 1 - p, this simplifies to
p / (1 - (1 - p))^2
= p / p^2
= 1 / p
Whew! That was a long derivation. But it shows that the expected number of coin tosses needed is 1/p.
Now, in your case, your algorithm can be thought of as tossing mn coins that come up heads with probability p and stopping if any of them come up heads. Surely, the expected number of coins you’d need to toss won’t be more than the case where you’re allowed to flip infinitely often, so your expected runtime is at most O(1 / p) (assuming p > 0).
If we assume that p is independent of m and n, then we can notice that, after some initial growth, each term added to our summation as we increase the number of flips is exponentially smaller than the previous ones. More specifically, after adding roughly logarithmically many terms to the sum, we'll be within a negligible amount of the total of the infinite summation. Therefore, provided that mn is roughly larger than Θ(log p), the sum ends up being Θ(1 / p). So in a big-O sense, if mn is independent of p, the runtime is Θ(1 / p).
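Not from the answer, but a quick Monte Carlo sanity check of the 1/p claim (illustrative Python; the helper name is mine):
import random

def expected_flips(p, trials=100000):
    # Average number of flips until the first head, over many trials.
    total = 0
    for _ in range(trials):
        flips = 1
        while random.random() >= p:   # tails with probability (1 - p): flip again
            flips += 1
        total += flips
    return total / trials

print(expected_flips(0.25), 1 / 0.25)  # both should be close to 4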

How to minimise integer function that's known to be U-shaped?

Let f be a function defined on the non-negative integers n ≥ 0. Suppose f is known to be U-shaped (convex and eventually increasing). How do I find its minimum? That is, the m such that f(m) ≤ f(n) for all n.
Examples of U-shaped functions:
n**2 - 1000*n + 100
(1 + 1/2 + ... + 1/n) + 1000/sqrt(1+n)
Of course, a human mathematician can try to minimise these particular functions using calculus. For my computer though, I want a general search algorithm that can minimise any U-shaped function.
Those functions again, in Python, to help anyone who wants to test an algorithm.
from math import sqrt

f = lambda n: n**2 - 1000*n + 100
g = lambda n: sum(1/i for i in range(1,n+1)) + 1000/sqrt(1+n)
You don't necessarily need code (in any language) in an answer, just a description of an algorithm. It would interest me, though, to see its answers for these specific functions.
You are probably looking for ternary search.
Ternary search will help you find f(m), as you require, in O(log N) time, where N is the number of points on the curve.
It basically takes two probe points m1 and m2 in the range (l, r) and then discards one third of the range on each step, recursing on the remainder.
Code in Python (adapted from Wikipedia):
def ternarySearch(f, left, right, absolutePrecision):
    while True:
        # left and right are the current bounds; the minimum is between them
        if abs(right - left) < absolutePrecision:
            return (left + right) / 2
        leftThird = (2 * left + right) / 3
        rightThird = (left + 2 * right) / 3
        if f(leftThird) < f(rightThird):
            right = rightThird
        else:
            left = leftThird
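Since the question is about integer arguments, here is an illustrative discrete adaptation (my own sketch, assuming you already have bounds left <= m <= right): shrink the interval until only a few candidates remain, then check them directly.
def ternary_search_int(f, left, right):
    # Discrete ternary search for a minimiser of a U-shaped f on [left, right].
    while right - left > 2:
        m1 = left + (right - left) // 3
        m2 = right - (right - left) // 3
        if f(m1) < f(m2):
            right = m2 - 1   # some minimiser lies in [left, m2 - 1]
        else:
            left = m1 + 1    # some minimiser lies in [m1 + 1, right]
    return min(range(left, right + 1), key=f)

print(ternary_search_int(lambda n: n**2 - 1000*n + 100, 0, 10**6))   # 500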
If your function is known to be unimodal, use Fibonacci search. http://en.wikipedia.org/wiki/Fibonacci_search_technique
For a discrete domain, the way of deciding where the new "test points" are probed must be slightly adapted, as the formulas for the continuous domain don't yield integers. Anyway, the working principle remains the same.
As regards the number of tests required, we have the following hierarchy:
#Fibonacci < #Golden < #Ternary < #Dichotomic
This also works: use binary search on the (discrete) derivative to find the largest n with f'(n) <= 0.
def minimise_convex(f):
    """Given a U-shaped (convex and eventually increasing) function f, find its minimum over the non-negative integers. That is, m such that f(m) <= f(n) for all n. If there exist multiple solutions, return the largest. Uses binary search on the derivative."""
    f_prime = lambda n: (f(n) - f(n-1)) if n > 0 else 0
    return binary_search(f_prime, 0)
Where binary_search is defined as
def binary_search(f, t):
    """Given an increasing function f, find the greatest non-negative integer n such that f(n) <= t. If f(n) > t for all n, return None."""

What is the most efficient algorithm to find a straight line that goes through most points?

The problem:
N points are given on a 2-dimensional plane. What is the maximum number of points on the same straight line?
The problem has an O(N^2) solution: go through each point and find the number of points which have the same dx/dy ratio relative to the current point. Store the dx/dy relations in a hash map for efficiency.
Is there a better solution to this problem than O(N^2)?
There is likely no solution to this problem that is significantly better than O(n^2) in a standard model of computation.
The problem of finding three collinear points reduces to the problem of finding the line that goes through the most points, and finding three collinear points is 3SUM-hard, meaning that solving it in less than O(n^2) time would be a major theoretical result.
See the previous question on finding three collinear points.
For your reference (using the known proof), suppose we want to answer a 3SUM problem such as finding x, y, z in list X such that x + y + z = 0. If we had a fast algorithm for the collinear point problem, we could use that algorithm to solve the 3SUM problem as follows.
For each x in X, create the point (x, x^3) (for now we assume the elements of X are distinct). Next, check whether there exist three collinear points among the created points.
To see that this works, note that if x + y + z = 0 then the slope of the line from x to y is
(y^3 - x^3) / (y - x) = y^2 + yx + x^2
and the slope of the line from x to z is
(z^3 - x^3) / (z - x) = z^2 + zx + x^2 = (-(x + y))^2 - (x + y)x + x^2
= x^2 + 2xy + y^2 - x^2 - xy + x^2 = y^2 + yx + x^2
Conversely, if the slope from x to y equals the slope from x to z then
y^2 + yx + x^2 = z^2 + zx + x^2,
which implies that
(y - z) (x + y + z) = 0,
so either y = z or z = -x - y, which suffices to prove that the reduction is valid.
If there are duplicates in X, you first check whether x + 2y = 0 for any x and duplicate element y (in linear time using hashing or O(n lg n) time using sorting), and then remove the duplicates before reducing to the collinear point-finding problem.
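For illustration only, here is a small Python sketch of the construction (it uses a brute-force collinearity test, so it is not fast; the point is only to show the reduction):
from itertools import combinations

def has_three_summing_to_zero(xs):
    # Map each distinct value x to the point (x, x^3); by the argument above,
    # three of these points are collinear exactly when some x + y + z = 0.
    pts = [(x, x ** 3) for x in xs]
    for (ax, ay), (bx, by), (cx, cy) in combinations(pts, 3):
        if (bx - ax) * (cy - ay) == (cx - ax) * (by - ay):   # zero cross product
            return True
    return False

print(has_three_summing_to_zero([1, 2, -3, 7]))   # True: 1 + 2 + (-3) == 0
print(has_three_summing_to_zero([1, 2, 4, 8]))    # False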
If you limit the problem to lines passing through the origin, you can convert the points to polar coordinates (angle, distance from origin) and sort them by angle. All points with the same angle lie on the same line. O(n log n)
I don't think there is a faster solution in the general case.
The Hough Transform can give you an approximate solution. It is approximate because the binning technique has a limited resolution in parameter space, so the maximum bin will give you some limited range of possible lines.
Again an O(n^2) solution, with pseudocode. The idea is to create a hash table with the line itself as the key. A line is defined by its slope, the point where it cuts the x-axis, and the point where it cuts the y-axis.
The solution assumes languages like Java or C#, where the object's equals and hashCode methods are used by the hashing function.
Create an Object (call SlopeObject) with 3 fields
Slope // Can be Infinity
Point of intercept with x-axis -- poix // Will be (Infinity, some y value) or (x value, 0)
Count
poix will be an (x, y) pair. If the line crosses the x-axis, poix will be (some number, 0). If the line is parallel to the x-axis, then poix = (Infinity, some number), where the y value is where the line crosses the y-axis.
Override the equals method so that 2 objects are equal if their Slope and poix are equal.
Override hashCode with a function which provides a hash code based on the combination of the values of Slope and poix. Some pseudocode below:
Hashmap map;
foreach (point a in the array) {
    foreach (every other point b) {
        slope = calculateSlope(a, b);
        poix = calculateXInterception(a, b);
        SlopeObject so = new SlopeObject(slope, poix, 1); // Slope, poix and initial count 1.
        SlopeObject inMapSlopeObj = map.get(so);
        if (inMapSlopeObj == null) {
            map.put(so, so);
        } else {
            inMapSlopeObj.setCount(inMapSlopeObj.getCount() + 1);
        }
    }
}
SlopeObject maxCounted = getObjectWithMaxCount(map);
print("line is through " + maxCounted.poix + " with slope " + maxCounted.slope);
Move to the dual plane using the point-line duality transform: p = (a, b) maps to the line p*: y = a*x + b.
Now, using a line-sweep algorithm, find all intersection points in N log N time.
(If you have points which are one above the other, just rotate the points by some small angle.)
The intersection points in the dual plane correspond to lines in the primal plane.
To whoever said that, because 3SUM reduces to this problem, the complexity must be O(n^2): note that the best known complexity of 3SUM is slightly less than that.
Please check https://en.wikipedia.org/wiki/3SUM and also read
https://tmc.web.engr.illinois.edu/reduce3sum_sosa.pdf
As already mentioned, there probably isn't a way to solve the general case of this problem better than O(n^2). However, if you assume that a large number of points lie on the same line (say the probability that a random point in the set lies on the line with the maximum number of points is p) and you don't need an exact algorithm, a randomized algorithm is more efficient.
maxPoints = 0
Repeat for k iterations:
1. Pick 2 random, distinct points uniformly at random
2. maxPoints = max(maxPoints, number of points that lie on the
   line defined by the 2 points chosen in step 1)
Note that in the first step, if you pick 2 points which lie on the line with the maximum number of points, you'll get the optimal solution. Assuming n is very large (i.e. we can treat the probability of finding 2 desirable points as sampling with replacement), the probability of this happening is p^2. Therefore the probability of finding a suboptimal solution after k iterations is (1 - p^2)^k.
Suppose you can tolerate a false negative rate of err. Then this algorithm runs in O(nk) = O(n * log(err) / log(1 - p^2)). If both n and p are large enough, this is significantly more efficient than O(n^2). (E.g. suppose n = 1,000,000 and you know there are at least 10,000 points that lie on the same line. Then n^2 would require on the order of 10^12 operations, while the randomized algorithm would require on the order of 10^9 operations to get an error rate of less than 5*10^-5.)
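An illustrative Python sketch of the randomized procedure (integer coordinates are assumed so that the collinearity test is exact; the helper names are mine):
import random

def count_on_line(points, p1, p2):
    # How many points lie on the line through p1 and p2 (p1 != p2)?
    (x1, y1), (x2, y2) = p1, p2
    return sum((x2 - x1) * (y - y1) == (y2 - y1) * (x - x1) for (x, y) in points)

def max_points_randomized(points, k):
    best = 0
    for _ in range(k):
        p1, p2 = random.sample(points, 2)                  # two distinct random points
        best = max(best, count_on_line(points, p1, p2))
    return best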
It is unlikely for an o(n^2) algorithm to exist, since the problem (of even checking if 3 points in R^2 are collinear) is 3SUM-hard (http://en.wikipedia.org/wiki/3SUM).
This is not a solution better than O(n^2), but you can do the following:
1. For each point, first translate it to the (0,0) coordinate, and then apply the equivalent translation to all the other points (move them by the same x, y distance you needed to move the original chosen point).
2. Convert this new set of translated points to angles with respect to the new (0,0).
3. Keep track of the maximum number (MSN) of points that fall on each angle.
4. Choose the maximum stored number (MSN); that will be the solution.

Calculating sum of geometric series (mod m)

I have a series
S = i^m + i^(2m) + ... + i^(km) (mod m)
0 <= i < m, k may be very large (up to 100,000,000), m <= 300000
I want to find the sum. I cannot apply the geometric progression (GP) formula because the result would have a denominator, and I would then have to find a modular inverse, which may not exist (if the denominator and m are not coprime).
So I made an alternate algorithm, assuming that these powers form a cycle of length much smaller than k (because it is a modular equation, I would obtain something like 2, 7, 9, 1, 2, 7, 9, 1, ...) and that this cycle repeats throughout the series. So instead of iterating from 0 to k, I would find the sum of the numbers in one cycle, calculate the number of cycles in the series, and multiply them. I first found i^m (mod m) and then multiplied by this number again and again, taking the modulus at each step, until I reached the first element again.
But when I actually coded the algorithm, for some values of i I got cycles of very large size, so it took a large amount of time to terminate; hence my assumption was incorrect.
So is there any other pattern we can find out? (Basically I don't want to iterate over k.)
So please give me an idea of an efficient algorithm to find the sum.
This is the algorithm for a similar problem I encountered
You probably know that one can calculate the power of a number in logarithmic time. You can also do so for calculating the sum of the geometric series. Since it holds that
1 + a + a^2 + ... + a^(2*n+1) = (1 + a) * (1 + (a^2) + (a^2)^2 + ... + (a^2)^n),
you can recursively calculate the geometric series on the right hand to get the result.
This way you do not need division, so you can take the remainder of the sum (and of intermediate results) modulo any number you want.
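A direct Python transcription of that identity (an illustrative sketch; the base cases and the even-exponent step are my own additions):
def geom_sum_mod(a, n, m):
    # Return (1 + a + a^2 + ... + a^n) mod m without needing any modular inverse.
    if n < 0:
        return 0
    if n == 0:
        return 1 % m
    if n % 2 == 1:
        # odd top exponent: 1 + a + ... + a^n = (1 + a) * (1 + a^2 + ... + (a^2)^((n-1)/2))
        return (1 + a) * geom_sum_mod(a * a % m, (n - 1) // 2, m) % m
    # even top exponent: peel off the last term and recurse on the odd-length prefix
    return (geom_sum_mod(a, n - 1, m) + pow(a, n, m)) % m

a, n, m = 7, 20, 1000
assert geom_sum_mod(a, n, m) == sum(pow(a, j, m) for j in range(n + 1)) % m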
As you've noted, doing the calculation for an arbitrary modulus m is difficult because many values might not have a multiplicative inverse mod m. However, if you can solve it for a carefully selected set of alternate moduli, you can combine them to obtain a solution mod m.
Factor m into p_1, p_2, p_3 ... p_n such that each p_i is a power of a distinct prime
Since each p is a distinct prime power, they are pairwise coprime. If we can calculate the sum of the series with respect to each modulus p_i, we can use the Chinese Remainder Theorem to reassemble them into a solution mod m.
For each prime power modulus, there are two trivial special cases:
If i^m is congruent to 0 mod p_i, the sum is trivially 0.
If i^m is congruent to 1 mod p_i, then the sum is congruent to k mod p_i.
For other values, one can apply the usual formula for the sum of a geometric sequence:
S = sum(j=0 to k, (i^m)^j) = ((i^m)^(k+1) - 1) / (i^m - 1)
TODO: Prove that (i^m - 1) is coprime to p_i, or find an alternate solution for when they have a nontrivial GCD. Hopefully the fact that p_i is a prime power and also a divisor of m will be of some use... If p_i is a divisor of i, the condition holds. If p_i is prime (as opposed to a prime power), then either the special case i^m = 1 applies, or (i^m - 1) has a multiplicative inverse.
If the geometric sum formula isn't usable for some p_i, you could rearrange the calculation so you only need to iterate from 1 to p_i instead of 1 to k, taking advantage of the fact that the terms repeat with a period no longer than p_i.
(Since your series doesn't contain a j=0 term, the value you want is actually S-1.)
This yields a set of congruences mod p_i, which satisfy the requirements of the CRT.
The procedure for combining them into a solution mod m is described in the above link, so I won't repeat it here.
This can be done via the method of repeated squaring, which is O(log(k)) time, or O(log(k)log(m)) time, if you consider m a variable.
In general, a[n] = 1 + b + b^2 + ... + b^(n-1) mod m can be computed by noting that:
a[j+k]==b^{j}a[k]+a[j]
a[2n]==(b^n+1)a[n]
The second is just a corollary of the first (take j = k = n).
In your case, b=i^m can be computed in O(log m) time.
The following Python code implements this:
def geometric(n, b, m):
    T = 1
    e = b % m
    total = 0
    while n > 0:
        if n & 1 == 1:
            total = (e * total + T) % m
        T = ((e + 1) * T) % m
        e = (e * e) % m
        n = n // 2
        # print('{} {} {}'.format(total, T, e))
    return total
This bit of magic has a mathematical reason - the operation on pairs defined as
(a,r)#(b,s)=(ab,as+r)
is associative, and the first rule above basically means that:
(b,1)#(b,1)#... n times ... #(b,1)=(b^n,1+b+b^2+...+b^(n-1))
Repeated squaring always works when operations are associative. In this case, the # operator is O(log(m)) time, so repeated squaring takes O(log(n)log(m)).
One way to look at this is via matrix exponentiation:
[[b,1],[0,1]]^n == [[b^n, 1+b+...+b^(n-1)],[0,1]]
You can use a similar method to compute (a^n-b^n)/(a-b) modulo m because matrix exponentiation gives:
[[b,1],[0,a]]^n == [[b^n,a^(n-1)+a^(n-2)b+...+ab^(n-2)+b^(n-1)],[0,a^n]]
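A quick way to convince yourself of the first matrix identity (plain-Python 2x2 arithmetic, purely illustrative):
def mat_mult(A, B, m):
    # 2x2 matrix product modulo m.
    return [[(A[0][0] * B[0][0] + A[0][1] * B[1][0]) % m, (A[0][0] * B[0][1] + A[0][1] * B[1][1]) % m],
            [(A[1][0] * B[0][0] + A[1][1] * B[1][0]) % m, (A[1][0] * B[0][1] + A[1][1] * B[1][1]) % m]]

def mat_pow(A, n, m):
    # Repeated squaring on 2x2 matrices.
    R = [[1, 0], [0, 1]]
    while n > 0:
        if n & 1:
            R = mat_mult(R, A, m)
        A = mat_mult(A, A, m)
        n >>= 1
    return R

b, n, m = 7, 10, 1000
assert mat_pow([[b, 1], [0, 1]], n, m)[0][1] == sum(pow(b, j, m) for j in range(n)) % m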
Based on the approach of #braindoper, a complete algorithm which calculates
1 + a + a^2 + ... +a^n mod m
looks like this in Mathematica:
geometricSeriesMod[a_, n_, m_] :=
Module[ {q = a, exp = n, factor = 1, sum = 0, temp},
While[And[exp > 0, q != 0],
If[EvenQ[exp],
temp = Mod[factor*PowerMod[q, exp, m], m];
sum = Mod[sum + temp, m];
exp--];
factor = Mod[Mod[1 + q, m]*factor, m];
q = Mod[q*q, m];
exp = Floor[ exp /2];
];
Return [Mod[sum + factor, m]]
]
Parameters:
a is the "ratio" of the series. It can be any integer (including zero and negative values).
n is the highest exponent of the series. Allowed are integers >= 0.
m is the integer modulus != 0
Note: The algorithm performs a Mod operation after every arithmetic operation. This is essential, if you transcribe this algorithm to a language with a limited word length for integers.
