I am trying to write an algorithm to establish the correlation between n-bit integers for the value "1".
Here is an example of a 5-bit integer: 0,1,0,0,1
I want to establish the percentage of correlation between this integer and a set of N other integers.
For example, Integer A(0,1,0,0,1) and Integer B(0,1,0,0,0) have a correlation of 0.5 for the value "1", as only one of A's two 1-bits (the second bit) is matched.
In my Firebase database, I have one n-bit integer attached to each user_ID, which I want to match against the n-bit integer of every other user of my application to get a correlation between each pair of users.
The distribution of the total correlations between users will follow a Gaussian curve that I want to use in the future to match users with each other.
For example, I want user A to be matched with every other user, with the matches sorted in decreasing order of affinity (from high to low correlation between their n-bit integers).
Do you have any idea how I could write an algorithm to establish the correlation between the N users, and then another one to sort these correlations from high to low?
Any help would be greatly appreciated.
Thank you for your time,
Maxime
You can use the bitwise AND operation to get the result R.
Example:
A = 9 = 01001
B = 8 = 01000
C = 7 = 00111
D = 31 = 11111
R = A & B gives 8 = 01000; the correlation is obtained by counting the ones: popcount(R)/popcount(A) = 1/2 = 0.5.
R = A & C gives 1 = 00001; the correlation: popcount(R)/popcount(A) = 1/2 = 0.5.
R = A & D gives 9 = 01001; popcount(R)/popcount(A) = 2/2 = 1.
Here we have a problem: A and D correlate perfectly even though they differ. You can solve this by dividing by the larger of the two popcounts, i.e., popcount(R)/max(popcount(A), popcount(D)).
I believe it is better to use the total bit count (here 5).
The results would then be:
corr AB = 1/5 = 0.2
corr AC = 1/5 = 0.2
corr AD = 2/5 = 0.4
corr CD = 3/5 = 0.6
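To make this concrete, here is a minimal Python sketch of both steps (pairwise correlation via AND plus popcount, then per-user sorting). The users dict is a hypothetical stand-in for the per-user integers you would fetch from Firebase:

N_BITS = 5
# Hypothetical stand-in for the per-user integers stored in Firebase.
users = {"alice": 0b01001, "bob": 0b01000, "carol": 0b00111, "dave": 0b11111}

def correlation(x, y, n_bits=N_BITS):
    # Shared 1-bits, normalized by the total bit count.
    return bin(x & y).count("1") / n_bits

def matches_for(uid):
    # Correlation of one user against every other, sorted high to low.
    scores = [(other, correlation(users[uid], bits))
              for other, bits in users.items() if other != uid]
    return sorted(scores, key=lambda t: t[1], reverse=True)

print(matches_for("alice"))  # [('dave', 0.4), ('bob', 0.2), ('carol', 0.2)]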
Problem statement:
Harry got a letter from Hogwarts School inviting him to join the wizardry. This letter consists of a magical key (6174) and a 4-digit number n with at least two distinct digits. Harry has to count in how many steps this number n can be converted to the magical key invented by Professor Dumbledore, using the following procedure: arrange the digits of n in descending order to get x and in ascending order to get y, then replace n with x - y (keeping leading zeros so n stays 4 digits).
For example:
n = 1234
Step 1: x = 4321, y = 1234 => n = x - y = 3087
Step 2: x = 8730, y = 0378 => n = x - y = 8352
Step 3: x = 8532, y = 2358 => n = x - y = 6174
and you are done.
Answer is 3
I am interested in knowing the minimum number of the above operations required to get 6174.
My solution, which is a kind of BFS, gives TLE (Time Limit Exceeded).
a = input()
ref = a
b = 0
while ref != "6174":
    b = b + 1
    num1 = "".join(sorted(ref))                # digits ascending
    num2 = "".join(sorted(ref, reverse=True))  # digits descending
    diff = int(num2) - int(num1)
    ref = str(diff).zfill(4)  # zfill keeps leading zeros, e.g. 999 -> "0999"
print(b)
Is it possible to reduce time complexity?
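One observation that can help if many inputs are queried: there are at most 10000 four-digit states, and every chain reaches 6174 in a handful of steps, so the step counts can be memoized (or precomputed once). A sketch, assuming the same Kaprekar step as above:

from functools import lru_cache

@lru_cache(maxsize=None)
def steps_to_6174(s):
    # s is a 4-digit string; returns the number of Kaprekar steps to 6174.
    if s == "6174":
        return 0
    asc = "".join(sorted(s))
    desc = "".join(sorted(s, reverse=True))
    nxt = str(int(desc) - int(asc)).zfill(4)
    return 1 + steps_to_6174(nxt)

print(steps_to_6174("1234"))  # 3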
I'm using a 64-bit LCG (MMIX, by Knuth). It generates a block of random numbers inside my code, which uses them to perform some operations. My code runs on a single core and I would like to parallelize the work to reduce the execution time.
Before thinking about more advanced methods, I'd like to simply run several identical copies of the code in parallel (in fact the code repeats the same task over a number of independent simulations, so I can simply split the simulations between identical copies and run them in parallel).
My only problem is finding a seed for each copy; in particular, to avoid the possibility of unwanted non-trivial correlations between data generated in different copies, I have to be sure that the random numbers generated by the various copies don't overlap. To do so, starting from a certain seed in the first copy, I have to find a way to get a value (the next seed) very distant from it not in absolute value but in the pseudo-random sequence (such that going from the first to the second seed requires a huge number of LCG steps).
My first attempt was this:
starting from the LCG relation between two consecutive numbers in the sequence,

I_(n+1) = (a * I_n + c) mod m,

one can write the n-step relation

I_n = (a^n * I_0 + c * (a^n - 1) / (a - 1)) mod m.

So, in principle, I could evaluate the above relation with, say, n = 2^40 and I_0 equal to the value of the first seed, and obtain a new seed that is 2^40 steps away from the first one in the LCG sequence.
The problem is that, doing so, I necessarily overflow when calculating a^n. In fact, for MMIX (by Knuth) a ~ 2^62, and I use unsigned long long int (64 bits). Note that the only real problem here is the fraction in the above relation. If there were only sums and multiplications, I could ignore the overflow thanks to the following modular properties (in fact I'm using c = 2^64 as the modulus, a 64-bit generator):

(x + y) mod c = ((x mod c) + (y mod c)) mod c
(x * y) mod c = ((x mod c) * (y mod c)) mod c
So, starting from a certain value (first seed), how can I find a second one distant a huge number of step in the LC pseudo-random sequence?
[EDIT]
r3mainer's solution works perfectly in Python. I'm now trying to implement it in C using unsigned __int128 variables. I have only one problem: in principle I should compute

a^n mod (c * (a - 1))

with n = 2^40 and c * (a - 1) ~ 2^126. I proceed with a loop: starting with temp = a, in each iteration I compute temp = temp * temp and then temp mod (c * (a - 1)). The problem is the multiplication step (temp = temp * temp): temp could in principle be any number < c * (a - 1) ~ 2^126, so if temp is a big number, say > 2^64, I'll overflow, wrapping past 2^128 - 1 before the next modulo operation. So is there a way to avoid this? For now the only solution I see is to perform each multiplication with a loop over bits, as suggested here: c code: prevent overflow in modular operation with huge modules (modules near the overflow treshold)
Is there another way to perform the modulo operation during the multiplication?
(Note that since c = 2^64, the mod c operation itself doesn't have this problem, because the overflow point of unsigned long long variables coincides with the modulus.)
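One standard way around this is to build the product itself from modular doublings and additions, so no intermediate value ever exceeds 2*M. Here is a sketch in Python of the double-and-add pattern you would transliterate to C with unsigned __int128 (in Python itself the built-in pow(a, n, M) already handles all of this):

def mulmod(x, y, M):
    # (x * y) % M using only additions, doublings and reductions,
    # so intermediates stay below 2*M even when M is near the type's maximum.
    result = 0
    x %= M
    while y:
        if y & 1:
            result = (result + x) % M
        x = (x + x) % M  # double x and reduce immediately
        y >>= 1
    return result

def powmod(a, n, M):
    # Modular exponentiation by squaring, built on the overflow-safe mulmod.
    result = 1
    a %= M
    while n:
        if n & 1:
            result = mulmod(result, a, M)
        a = mulmod(a, a, M)
        n >>= 1
    return result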
Any LCG of the form x[n+1] = (x[n] * a + c) % m can be skipped to an arbitrary position very quickly.
Starting with a seed value of zero, the first few iterations of the LCG will give you this sequence:
x₀ = 0
x₁ = c % m
x₂ = (c(a + 1)) % m
x₃ = (c(a² + a + 1)) % m
x₄ = (c(a³ + a² + a + 1)) % m
It's pretty easy to see that each term is actually the sum of a geometric series, which can be calculated with a simple formula:
x_n = (c(a^{n-1} + a^{n-2} + ... + a + 1)) % m
= (c * (a^n - 1) / (a - 1)) % m
The (a^n - 1) term can be calculated quickly by modular exponentiation, but dividing by (a-1) is a bit tricky because (a-1) and m are both even (i.e., not coprime), so we can't calculate the modular multiplicative inverse of (a-1) mod m directly.
Instead, calculate (a^n-1) mod m*(a-1), then perform a straightforward (non-modular) division of the result by a-1. In Python, the calculation would go something like this:
def lcg_skip(m, a, c, n):
    # Calculate nth term of LCG sequence with parameters m (modulus),
    # a (multiplier) and c (increment), assuming an initial seed of zero
    a1 = a - 1
    t = pow(a, n, m * a1) - 1
    t = (t * c // a1) % m
    return t

def test(nsteps):
    m = 2**64
    a = 6364136223846793005
    c = 1442695040888963407
    #
    print("Calculating by brute force:")
    seed = 0
    for i in range(nsteps):
        seed = (seed * a + c) % m
    print(seed)
    #
    print("Calculating by fast method:")
    # Calculate nth term by modular exponentiation
    print(lcg_skip(m, a, c, nsteps))

test(1000000)
So to create LCGs with non-overlapping output sequences, all you would need to do is use initial seed values generated by lcg_skip() with values of n that are far enough apart.
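For example, a hypothetical way to carve the period into four non-overlapping streams with the function above:

# Give each of 4 parallel streams a start point 2^62 steps apart.
m, a, c = 2**64, 6364136223846793005, 1442695040888963407
stride = 2**62
seeds = [lcg_skip(m, a, c, i * stride) for i in range(4)]
print(seeds)  # use one seed per parallel worker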
Well, for an LCG it is a known property that you can jump forward and backward in O(log2(N)) time, where N is the distance between the jump points; see the paper by F. Brown, "Random Number Generation with Arbitrary Stride," Trans. Am. Nucl. Soc. (Nov. 1994).
This means that if you have LCG parameters (a, c) satisfying the Hull–Dobell theorem, the whole period is 2^64 numbers before they repeat. So for Nt threads you choose a jump distance of 2^64 / Nt, have all threads start with the same seed, and after initializing each LCG just jump it forward by (2^64 / Nt) * threadId; you will then be completely safe from RNG correlations due to overlapping sequences.
For the simplest case of common unsigned 64-bit modular math, as implemented in NumPy, the code below should work fine:
import numpy as np

class LCG(object):
    UZERO: np.uint64 = np.uint64(0)
    UONE : np.uint64 = np.uint64(1)

    def __init__(self, seed: np.uint64, a: np.uint64, c: np.uint64) -> None:
        self._seed: np.uint64 = np.uint64(seed)
        self._a   : np.uint64 = np.uint64(a)
        self._c   : np.uint64 = np.uint64(c)

    def next(self) -> np.uint64:
        self._seed = self._a * self._seed + self._c
        return self._seed

    def seed(self) -> np.uint64:
        return self._seed

    def set_seed(self, seed: np.uint64) -> None:
        self._seed = seed

    def skip(self, ns: np.int64) -> None:
        """
        Signed argument - skip forward as well as backward

        The algorithm here to determine the parameters used to skip ahead is
        described in the paper F. Brown, "Random Number Generation with Arbitrary Stride,"
        Trans. Am. Nucl. Soc. (Nov. 1994). This algorithm is able to skip ahead in
        O(log2(N)) operations instead of O(N). It computes parameters
        A and C which can then be used to find x_N = A*x_0 + C mod 2^M.
        """
        nskip: np.uint64 = np.uint64(ns)

        a: np.uint64 = self._a
        c: np.uint64 = self._c

        a_next: np.uint64 = LCG.UONE
        c_next: np.uint64 = LCG.UZERO

        while nskip > LCG.UZERO:
            if (nskip & LCG.UONE) != LCG.UZERO:
                a_next = a_next * a
                c_next = c_next * a + c
            c = (a + LCG.UONE) * c
            a = a * a
            nskip = nskip >> LCG.UONE

        self._seed = a_next * self._seed + c_next

#%%
np.seterr(over='ignore')  # LCG arithmetic relies on uint64 wraparound

seed = np.uint64(1)
rng64 = LCG(seed, np.uint64(6364136223846793005), np.uint64(1))

print(rng64.next())
print(rng64.next())
print(rng64.next())

#%%
rng64.skip(-3)  # back by 3
print(rng64.next())
print(rng64.next())
print(rng64.next())

rng64.skip(-3)  # back by 3
rng64.skip(2)   # forward by 2
print(rng64.next())
Tested in Python 3.9.1, x64 Win 10
For an input matrix
in = [1 1;
1 2;
1 3;
1 4;
2 5;
2 6;
2 7;
3 8;
3 9;
3 10;
3 11];
I want to get the output matrix
out = [1 5 8;
2 6 9;
3 7 10;
4 0 11];
Meaning: I want to reshape the second input column into an output matrix, where all values corresponding to one value in the first input column are written into one column of the output matrix.
As there can be different numbers of entries for each value in the first input column (here 4 values for "1" and "3", but only 3 for "2"), the normal reshape function is not applicable. I need to pad all columns to the maximum number of rows.
Do you have an idea how to do this in a MATLAB-ish way?
The second input column can only contain positive numbers, so the padding values can be 0, -x, NaN, ...
The best I could come up with is this (loop-based):
maxNumElem = 0;
for i = in(1,1):in(end,1)
    maxNumElem = max(maxNumElem, numel(find(in(:,1)==i)));
end

out = zeros(maxNumElem, in(end,1)-in(1,1)+1); % +1 so every ID gets a column
for i = in(1,1):in(end,1)
    tmp = in(in(:,1)==i, 2);
    out(1:length(tmp), i) = tmp;
end
Either of the following approaches assumes that column 1 of in is sorted, as in the example. If that's not the case, apply this initially to sort in according to that criterion:
in = sortrows(in,1);
Approach 1 (using accumarray)
Compute the required number of rows, using mode;
Use accumarray to gather the values corresponding to each column, filled with zeros at the end. The result is a cell;
Concatenate horizontally the contents of all cells.
Code:
[~, n] = mode(in(:,1)); % step 1
out = accumarray(in(:,1), in(:,2), [], @(x){[x; zeros(n-numel(x),1)]}); % step 2
out = [out{:}]; % step 3
Alternatively, step 1 could be done with histc
n = max(histc(in(:,1), unique(in(:,1)))); % step 1
or with accumarray:
n = max(accumarray(in(:,1), in(:,2), [], @(x) numel(x))); % step 1
Approach 2 (using sparse)
Generate a row-index vector using this answer by @Dan, and then build your matrix with sparse:
a = arrayfun(@(x)(1:x), diff(find([1, diff(in(:,1).'), 1])), 'uni', 0);
out = full(sparse([a{:}], in(:,1), in(:,2)));
Introduction to proposed solution and Code
Proposed here is a bsxfun-based masking approach that uses the binary operators available as builtins for use with bsxfun; as such, I would consider it very appropriate for problems like this. Of course, you must also be aware that bsxfun is a memory-hungry tool, so it could be a problem if you are dealing with billions of elements, depending on the memory available to MATLAB.
Getting into the details of the proposed approach, we get the counts of each ID from column 1 of the input with histc. Then, the magic happens with bsxfun + @le to create a mask of positions in the output array (initialized by zeros) that are to be filled by the column-2 elements from the input. That's all you need to tackle the problem with this approach.
Solution Code
counts = histc(in(:,1), 1:max(in(:,1)))'; % counts of each ID from column 1
max_counts = max(counts);                 % maximum count over all IDs
mask = bsxfun(@le, [1:max_counts]', counts); % mask of locations where
                                             % column-2 elements are to be placed
out = zeros(max_counts, numel(counts));   % initialize the output array
out(mask) = in(:,2);                      % place the column-2 elements
Benchmarking (for performance)
The benchmarking presented here compares the proposed solution in this post against the various methods presented in Luis's solution. This skips the original loopy approach presented in the problem as it appeared to be very slow for the input generated in the benchmarking code.
Benchmarking Code
num_ids = 5000;
counts_each_id = randi([10 100], num_ids, 1);
num_runs = 20; % number of iterations each approach is run for

% Generate random input array
in = [];
for k = 1:num_ids
    in = [in ; [repmat(k, counts_each_id(k), 1) rand(counts_each_id(k), 1)]];
end

% Warm up tic/toc.
for k = 1:50000
    tic(); elapsed = toc();
end

disp('------------- With HISTC + BSXFUN Masking approach')
tic
for iter = 1:num_runs
    counts = histc(in(:,1), 1:max(in(:,1)))';
    max_counts = max(counts);
    out = zeros(max_counts, numel(counts));
    out(bsxfun(@le, [1:max_counts]', counts)) = in(:,2);
end
toc
clear counts max_counts out

disp('------------- With MODE + ACCUMARRAY approach')
tic
for iter = 1:num_runs
    [~, n] = mode(in(:,1)); % step 1
    out = accumarray(in(:,1), in(:,2), [], @(x){[x; zeros(n-numel(x),1)]}); % step 2
    out = [out{:}]; % step 3
end
toc
clear n out

disp('------------- With HISTC + ACCUMARRAY approach')
tic
for iter = 1:num_runs
    n = max(histc(in(:,1), unique(in(:,1))));
    out = accumarray(in(:,1), in(:,2), [], @(x){[x; zeros(n-numel(x),1)]});
    out = [out{:}];
end
toc
clear n out

disp('------------- With ARRAYFUN + Sparse approach')
tic
for iter = 1:num_runs
    a = arrayfun(@(x)(1:x), diff(find([1, diff(in(:,1).'), 1])), 'uni', 0);
    out = full(sparse([a{:}], in(:,1), in(:,2)));
end
toc
clear a out
Results
------------- With HISTC + BSXFUN Masking approach
Elapsed time is 0.598359 seconds.
------------- With MODE + ACCUMARRAY approach
Elapsed time is 2.452778 seconds.
------------- With HISTC + ACCUMARRAY approach
Elapsed time is 2.579482 seconds.
------------- With ARRAYFUN + Sparse approach
Elapsed time is 1.455362 seconds.
Slightly better, but still uses a loop :(
out = zeros(4,3); % preallocate the output (hard-coded size for this example)
for i = 1:max(in(:,1)) % loop over the IDs found in column 1
    ind = find(in(:,1)==i); % rows belonging to ID i
    out(1:numel(ind), i) = in(ind,2);
end
don't know if you can avoid the loop...
How can I generate a random number between A = 1 and B = 10 where each number has a different probability?
Example: number / probability
1 - 20%
2 - 20%
3 - 10%
4 - 5%
5 - 5%
...and so on.
I'm aware of some hard-coded workarounds which unfortunately are of no use with larger ranges, for example A = 1000 and B = 100000.
Assume we have a Rand() method which returns a random number R, 0 < R < 1. Can anyone post a code sample with a proper way of doing this? Preferably in C# / Java / ActionScript.
Build an array of 100 integers and populate it with 20 1's, 20 2's, 10 3's, 5 4's, 5 5's, etc. Then just randomly pick an item from the array.
int[] numbers = new int[100];
// populate the first 20 with the value '1'
for (int i = 0; i < 20; ++i)
{
numbers[i] = 1;
}
// populate the rest of the array as desired.
// To get an item:
// Since your Rand() function returns 0 < R < 1
int ix = (int)(Rand() * 100);
int num = numbers[ix];
This works well if the number of items is reasonably small and your precision isn't too strict. That is, if you wanted 4.375% 7's, then you'd need a much larger array.
There is an elegant algorithm attributed by Knuth to A. J. Walker (Electronics Letters 10, 8 (1974), 127-128; ACM Trans. Math Software 3 (1977), 253-256).
The idea is that if you have a total of k * n balls of n different colors, then it is possible to distribute the balls in n containers such that container no. i contains balls of color i and at most one other color. The proof is by induction on n. For the induction step pick the color with the least number of balls.
In your example n = 10. Multiply the probabilities with a suitable m such that they are all integers. So, maybe m = 100 and you have 20 balls of color 0, 20 balls of color 1, 10 balls of color 2, 5 balls of color 3, etc. So, k = 10.
Now generate a table of dimension n, with each entry holding a probability (the fraction of container i's balls that are of color i) and the other color.
To generate a random ball, generate a random floating-point number r in the range [0, n). Let i be the integer part (floor of r) and x the excess (r – i).
if (x < table[i].probability) output i
else output table[i].other
The algorithm has the advantage that for each random ball you only make a single comparison.
Let me work out an example (same as Knuth).
Consider simulating throwing a pair of dice.
So P(2) = 1/36, P(3) = 2/36, P(4) = 3/36, P(5) = 4/36, P(6) = 5/36, P(7) = 6/36, P(8) = 5/36, P(9) = 4/36, P(10) = 3/36, P(11) = 2/36, P(12) = 1/36.
Multiply by 36 * 11 to get 396 balls: 11 of color 2, 22 of color 3, 33 of color 4, …, 11 of color 12.
We have k = 396 / 11 = 36.
Table[2] = (11/36, color 4)
Table[12] = (11/36, color 10)
Table[3] = (22/36, color 5)
Table[11] = (22/36, color 5)
Table[4] = (8/36, color 9)
Table[10] = (8/36, color 6)
Table[5] = (16/36, color 6)
Table[9] = (16/36, color 8)
Table[6] = (7/36, color 8)
Table[8] = (6/36, color 7)
Table[7] = (36/36, color 7)
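A minimal Python sketch of the table construction (following the induction idea above: repeatedly pair a color with fewer than k balls against one with more) and of the single-comparison sampling step; probabilities are assumed to sum to 1:

import random

def build_alias_table(probs):
    # probs: probabilities for outcomes 0..n-1, summing to 1.
    n = len(probs)
    prob = [p * n for p in probs]  # scale so a full container holds 1.0
    other = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        other[s] = l               # container s: color s plus color l
        prob[l] -= 1.0 - prob[s]   # color l donated balls to container s
        (small if prob[l] < 1.0 else large).append(l)
    return prob, other

def sample(prob, other):
    # A single comparison per draw, as described above.
    r = random.random() * len(prob)
    i = int(r)   # container index
    x = r - i    # excess
    return i if x < prob[i] else other[i]

# The two-dice example: index 0..10 stands for sums 2..12.
dice = [k / 36 for k in (1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1)]
prob, other = build_alias_table(dice)
print(sample(prob, other) + 2)  # one simulated throw of two dice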
Assuming that you have a function p(n) that gives you the desired probability for a random number:
r = rand() // a random number between 0 and 1
for i in A to B do
if r < p(i)
return i
r = r - p(i)
done
A faster way is to create an array of (B - A) * 100 elements and populate it with numbers from A to B, such that the ratio of the number of times each item occurs in the array to the size of the array is its probability. You can then generate a uniform random number to get an index into the array, and directly access the array to get your random number.
Map your uniform random results to the required outputs according to the probabilities.
E.g., for your example:
Let r = Rand(). Then:
If `0 <= r <= 0.2`: result = 1.
If `0.2 < r <= 0.4`: result = 2.
If `0.4 < r <= 0.5`: result = 3.
If `0.5 < r <= 0.55`: result = 4.
If `0.55 < r <= 0.6`: result = 5.
...
Here's an implementation of Knuth's algorithm. As discussed in some of the answers, it works by
1) creating a table of summed frequencies,
2) generating a random integer,
3) rounding it up with the ceiling function,
4) finding the "summed" range within which the random number falls, and outputting the original array entry based on it.
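The code itself isn't reproduced above, but a minimal Python sketch of those four steps (with hypothetical integer weights in the spirit of the question) could look like this:

import math
import random

def weighted_pick(frequencies):
    # 1) build the table of summed (cumulative) frequencies
    values, sums = [], []
    running = 0
    for value, freq in frequencies.items():
        running += freq
        values.append(value)
        sums.append(running)
    # 2) generate a random number, 3) round it up with the ceiling function
    r = math.ceil(random.random() * running)
    # 4) find the summed range within which r falls
    for value, cum in zip(values, sums):
        if r <= cum:
            return value

freqs = {1: 20, 2: 20, 3: 10, 4: 5, 5: 5, 6: 40}  # hypothetical weights
print(weighted_pick(freqs))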
Inverse Transform
In probability speak, a cumulative distribution function F(x) returns the probability that any randomly drawn value, call it X, is <= some given value x. For instance, if I did F(4) in this case, I would get .55, because the running sum of probabilities in your example is {.2, .4, .5, .55, .6, .65, ...}. That is, the probability of randomly getting a value less than or equal to 4 is .55. However, what I actually want to know is the inverse of the cumulative probability function, call it F_inv. I want to know what the x value is, given the cumulative probability. I want to pass in F_inv(.55) and get back 4. That is why this is called the inverse transform method.
So, in the inverse transform method, we are basically trying to find the interval in the cumulative distribution in which a random Uniform (0,1) number falls. This works out to the algorithm that perreal and icepack posted. Here is another way to state it in terms of the cumulative distribution function
Generate a random number U
for x in A .. B
if U <= F(x) then return x
Note that it might be more efficient to have the loop go from B down to A, returning x as soon as U > F(x-1), if the smaller probabilities come at the beginning of the distribution.
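Since F is non-decreasing, the interval lookup can also be done with a binary search instead of a linear scan; a small Python sketch using the cumulative values from the example above:

import bisect
import random

# Running sums from the example: F(1)=.2, F(2)=.4, F(3)=.5, ...
cumulative = [0.2, 0.4, 0.5, 0.55, 0.6, 0.65]  # truncated, as in the question
outcomes = [1, 2, 3, 4, 5, 6]

def inverse_transform():
    # Find the first x with U <= F(x) in O(log n) steps.
    u = random.random() * cumulative[-1]  # rescale since the list is truncated
    return outcomes[bisect.bisect_left(cumulative, u)]

print(inverse_transform())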
How do I determine the square root of a floating point number? Is the Newton-Raphson method a good way? I have no hardware square root either. I also have no hardware divide (but I have implemented floating point divide).
If possible, I would prefer to reduce the number of divides as much as possible, since they are so expensive.
Also, what should the initial guess be to reduce the total number of iterations?
Thank you so much!
When you use Newton-Raphson to compute a square-root, you actually want to use the iteration to find the reciprocal square root (after which you can simply multiply by the input--with some care for rounding--to produce the square root).
More precisely: we use the function f(x) = x^-2 - n. Clearly, if f(x) = 0, then x = 1/sqrt(n). This gives rise to the Newton iteration:
x_(i+1) = x_i - f(x_i)/f'(x_i)
= x_i - (x_i^-2 - n)/(-2x_i^-3)
= x_i + (x_i - nx_i^3)/2
= x_i*(3/2 - 1/2 nx_i^2)
Note that (unlike the iteration for the square root), this iteration for the reciprocal square root involves no divisions, so it is generally much more efficient.
I mentioned in your question on divide that you should look at existing soft-float libraries, rather than re-inventing the wheel. That advice applies here as well. This function has already been implemented in existing soft-float libraries.
Edit: the questioner seems to still be confused, so let's work an example: sqrt(612). 612 is 1.1953125 x 2^9 (or b1.0011001 x 2^9, if you prefer binary). Pull out the even portion of the exponent (9) to write the input as f * 2^(2m), where m is an integer and f is in the range [1,4). Then we will have:
sqrt(n) = sqrt(f * 2^2m) = sqrt(f)*2^m
Applying this reduction to our example gives f = 1.1953125 * 2 = 2.390625 (b10.011001) and m = 4. Now do a Newton-Raphson iteration to find x = 1/sqrt(f), using a starting guess of 0.5 (as I noted in a comment, this guess converges for all f, but you can do significantly better using a linear approximation as an initial guess):
x_0 = 0.5
x_1 = x_0*(3/2 - 1/2 * 2.390625 * x_0^2)
= 0.6005859...
x_2 = x_1*(3/2 - 1/2 * 2.390625 * x_1^2)
= 0.6419342...
x_3 = 0.6467077...
x_4 = 0.6467616...
So even with a (relatively bad) initial guess, we get rapid convergence to the true value of 1/sqrt(f) = 0.6467616600226026.
Now we simply assemble the final result:
sqrt(f) = x_n * f = 1.5461646...
sqrt(n) = sqrt(f) * 2^m = 24.738633...
And check: sqrt(612) = 24.738633...
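Here is a compact Python sketch of the whole procedure just described (exponent reduction, division-free reciprocal-root iteration, final multiply). math.frexp/ldexp stand in for the exponent manipulation a real soft-float library would do on the bit representation:

import math

def soft_sqrt(n, iters=8):
    # Reduce: n = f * 2^(2m) with f in [1, 4), as in the worked example.
    f, e = math.frexp(n)  # n = f * 2^e with 0.5 <= f < 1
    if e % 2 == 0:
        f, m = f * 4.0, (e - 2) // 2
    else:
        f, m = f * 2.0, (e - 1) // 2
    # Division-free Newton iteration for x = 1/sqrt(f).
    x = 0.5  # converges for all f in [1, 4); a linear fit would need fewer steps
    for _ in range(iters):
        x = x * (1.5 - 0.5 * f * x * x)
    # Assemble the final result: sqrt(n) = (x * f) * 2^m.
    return math.ldexp(x * f, m)

print(soft_sqrt(612.0))  # 24.738633...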
Obviously, if you want correct rounding, careful analysis is needed to ensure that you carry sufficient precision at each stage of the computation. This requires careful bookkeeping, but it isn't rocket science. You simply keep careful error bounds and propagate them through the algorithm.
If you want correct rounding without explicitly checking a residual, you need to compute sqrt(f) to a precision of 2p + 2 bits (where p is the precision of the source and destination type). However, you can also take the strategy of computing sqrt(f) to a little more than p bits, squaring that value, and adjusting the trailing bit by one if necessary (which is often cheaper).
sqrt is nice in that it is a unary function, which makes exhaustive testing for single-precision feasible on commodity hardware.
You can find the OS X soft-float sqrtf function on opensource.apple.com, which uses the algorithm described above (I wrote it, as it happens). It is licensed under the APSL, which may or not be suitable for your needs.
Probably (still) the fastest implementation for finding the inverse square root, and the 10 lines of code that I adore most.
It's based on Newton approximation, but with a few quirks. There's even a great story around this.
Easiest to implement (you can even implement this in a calculator):
def sqrt(x, TOL=0.000001):
    y = 1.0
    while abs(x/y - y) > TOL:
        y = (y + x/y) / 2.0
    return y
This is exactly Newton-Raphson:
y(new) = y - f(y)/f'(y)
f(y) = y^2-x and f'(y) = 2y
Substituting these values:
y(new) = y - (y^2-x)/2y = (y^2+x)/2y = (y+x/y)/2
If division is expensive you should consider: http://en.wikipedia.org/wiki/Shifting_nth-root_algorithm .
Shifting algorithms:
Let us assume you have two numbers a and b such that the least significant set bit of a is larger than b, and b has only one bit equal to 1 (e.g., in binary, a = 1000 and b = 10). Let s(b) = log_2(b) (which is just the position of the single 1-bit in b).
Assume we already know the value of a^2. Now (a+b)^2 = a^2 + 2ab + b^2. a^2 is already known; 2ab is a shifted left by s(b)+1; b^2 is b shifted left by s(b).
Algorithm:
Initialize a such that a has only one bit equal to one and a^2 <= n < (2*a)^2.
Let q = s(a).
b = a
sqra = a*a
For i = q-1 down to -10 (or whatever significance you want):
    b = b/2
    sqrab = sqra + 2ab + b^2
    if sqrab > n:
        continue
    sqra = sqrab
    a = a+b
Example: n = 612
a = 10000 (binary) = 16
sqra = 256
Iteration 1:
b=01000 (8)
sqrab = (a+b)^2 = 24^2 = 576
sqrab < n => a=a+b = 24
Iteration 2:
b = 4
sqrab = (a+b)^2 = 28^2 = 784
sqrab > n => a=a
Iteration 3:
b = 2
sqrab = (a+b)^2 = 26^2 = 676
sqrab > n => a=a
Iteration 4:
b = 1
sqrab = (a+b)^2 = 25^2 = 625
sqrab > n => a=a
Iteration 5:
b = 0.5
sqrab = (a+b)^2 = 24.5^2 = 600.25
sqrab < n => a=a+b = 24.5
Iteration 6:
b = 0.25
sqrab = (a+b)^2 = 24.75^2 = 612.5625
sqrab > n => a=a
Iteration 7:
b = 0.125
sqrab = (a+b)^2 = 24.625^2 = 606.390625
sqrab < n => a=a+b = 24.625
and so on.
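A direct Python transcription of this shifting scheme (a sketch; floats are used for the fractional steps where a fixed-point implementation would keep shifting integers):

def shifting_sqrt(n, steps=60):
    # Initialize a as the largest power of two with a^2 <= n < (2a)^2.
    a = 1.0
    while (2 * a) * (2 * a) <= n:
        a *= 2
    sqra = a * a
    b = a
    for _ in range(steps):
        b /= 2  # next bit position
        sqrab = sqra + 2 * a * b + b * b  # (a+b)^2 without squaring a again
        if sqrab <= n:  # keep the bit only if we stay at or below n
            sqra = sqrab
            a += b
    return a

print(shifting_sqrt(612))  # 24.7386...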
A good approximation to square root on the range [1,4) is
def sqrt(x):
    y = x*-0.000267
    y = x*(0.004686+y)
    y = x*(-0.034810+y)
    y = x*(0.144780+y)
    y = x*(-0.387893+y)
    y = x*(0.958108+y)
    return y+0.315413
Normalise your floating point number so the mantissa is in the range [1,4), use the above algorithm on it, and then divide the exponent by 2. No floating point divisions anywhere.
With the same CPU time budget you can probably do much better, but that seems like a good starting point.