Rounding of double to nearest member of an arithmetical progression? - algorithm

I have a sequence of double numbers given by the formula k = a + d * n, where a and d are constant double values, n is an integer, k >= 0, and a >= 0. For example:
..., 300, 301.6, 303.2, 304.8, 306.4, ...
I want to round a given number c to the nearest value from this sequence that is lower than c.
Currently I use something like this:
double someFunc(double c) {
    static double a = 1;
    static double d = 2;
    int n = 0;
    double a1 = a;
    if (c >= a) {
        while (a1 < c) {
            a1 += d;
        }
        a1 -= d;
    } else {
        while (a1 > c) {
            a1 -= d;
        }
    }
    return a1;
}
Is it possible to do the same without these awful loops? I ask because the following situation may appear:
abs(a - c) >> abs(d) (the first number is much greater than the second one, so a lot of iterations may be needed)
My question is similar to the following one, but in my case I also have the variable a, which influences the final result. It means that the sequence may not contain the number 0.

Suppose c is a number in your sequence. Then you have n = (c - a) / d.
Since you want the nearest sequence member that is <= c, take n = floor((c - a) / d).
Then you can round c to: a + d * floor((c - a) / d)
Suppose k = 3 + 5 * n and you want to round c = 21.
Then 3 + 5 * floor((21 - 3) / 5) = 3 + 5 * 3 = 18
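Putting it together, a loop-free version of someFunc could look like this (a minimal sketch reusing the a and d constants from the question; floor comes from <cmath>):
#include <cmath>
double someFunc(double c) {
    static double a = 1;
    static double d = 2;
    double n = std::floor((c - a) / d); // largest integer n with a + d * n <= c
    return a + d * n;
}
Because (c - a) / d is computed in floating point, the result can land one step off when c lies exactly on (or extremely close to) a sequence member, so an extra correction step may be needed if that matters.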

Related

Using a while loop the page keeps loading over and over (Codewars problem)

Hi guys, I'm trying to solve Factorial decomposition (a Codewars task).
Some numbers work for me, but when I reach the number 23 the page keeps looping over and over. Can someone please help me?
function decomp(n) {
    let c = []
    let sum = 1
    for (let i = n; i >= 1; i--) {
        sum *= i
    }
    let k = 2
    while (k <= sum) {
        if (sum % k !== 0) {
            k++
        }
        while (sum % k == 0) {
            c.push(k)
            sum = sum / k
        }
    }
    return c.join('*')
}
The function works fine until I reach the number 23, then the page keeps loading over and over. The task: the function decomp(n) should return the decomposition of n! into its prime factors in increasing order of the primes, as a string.
factorial can be a very big number (4000! has 12674 digits, n can go from 300 to 4000).
In Fortran - as in any other language - the returned string is not permitted to contain any redundant trailing whitespace: you can use dynamically allocated character strings.
example
n = 12; decomp(12) -> "2^10 * 3^5 * 5^2 * 7 * 11"
since 12! is divisible by 2 ten times, by 3 five times, by 5 two times and by 7 and 11 only once.
n = 22; decomp(22) -> "2^19 * 3^9 * 5^4 * 7^3 * 11^2 * 13 * 17 * 19"
n = 25; decomp(25) -> "2^22 * 3^10 * 5^6 * 7^3 * 11^2 * 13 * 17 * 19 * 23"
23! cannot be exactly expressed in double-precision floating-point format, which JavaScript uses for its numbers.
However, you don't need to compute n!. You just need to factorize each number and concatenate their factorizations.
Actually, you don't even need to factorize each number. Note that given n and p, there are floor(n/p) numbers no greater than n that are multiples of p, floor(n/(p*p)) that are multiples of p*p, etc.
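For example, with n = 12 and p = 2: floor(12/2) + floor(12/4) + floor(12/8) = 6 + 3 + 1 = 10, which is exactly the exponent of 2 in the expected result "2^10 * 3^5 * 5^2 * 7 * 11".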
function *primes(n) {
    // Sieve of Eratosthenes
    const isPrime = Array(n + 1).fill(true);
    isPrime[0] = isPrime[1] = false;
    for (let i = 2; i <= n; i++) {
        if (isPrime[i]) {
            yield i;
            for (let j = i * i; j <= n; j += i)
                isPrime[j] = false;
        }
    }
}
function decomp(n) {
    let s = n + '! =';
    for (const p of primes(n)) {
        let m = n, c = 0;
        // There are (n/p) numbers no greater than n that are multiples of p
        // There are (n/(p*p)) numbers no greater than n that are multiples of p*p
        // ...
        while (m = ((m / p) | 0)) {
            c += m;
        }
        s += (p == 2 ? ' ' : ' * ') + p + (c == 1 ? '' : '^' + c);
    }
    return s;
}
console.log(decomp(12))
console.log(decomp(22))
console.log(decomp(23))
console.log(decomp(24))
console.log(decomp(25))

Change x,y from 1,1 to p,q using given rules

Given a target of a = p, b = q.
In one cycle a can change to a = a + b, or b can change to b = b + a.
In any cycle either of the two operations can be performed, but not both.
Starting from a = 1, b = 1, calculate the number of iterations required to convert (a, b) from (1, 1) to (p, q) using the above-mentioned rules.
Return "not possible" if it cannot be done.
Can anyone tell me how to solve this problem?
As already mentioned in a comment, you can just go backwards. The larger element must be the one where the last operation was performed, so you can do the reverse on the larger element and see if you end up with (1, 1). Or better: subtract the smaller element from the larger one directly, as many times as needed for it to become smaller than the other one:
function steps(a, b) {
    let count = 0
    while (a != b) {
        console.log('(' + a + ', ' + b + ')')
        let t
        if (a > b) {
            t = a % b == 0 ? a / b - 1 : Math.floor(a / b)
            a -= t * b
        } else {
            t = b % a == 0 ? b / a - 1 : Math.floor(b / a)
            b -= t * a
        }
        count += t
    }
    if (a == 1)
        return count
    return -1
}
console.log(steps(87, 13))
console.log(steps(23, 69))
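With these inputs the function should return 12 for steps(87, 13) (subtracting 13 six times, then 9 once, 4 twice and 1 three times) and -1 for steps(23, 69), because that pair reduces to (23, 23) rather than (1, 1): gcd(23, 69) = 23, so it can never be reached from (1, 1).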

Given some rounded numbers, how to find the original fraction?

After asking this question on math.stackexchange.com I figured this might be a better place after all...
I have a small list of positive numbers rounded to (say) two decimals:
1.15 (can be 1.145 - 1.154999...)
1.92 (can be 1.915 - 1.924999...)
2.36 (can be 2.355 - 2.364999...)
2.63 (can be 2.625 - 2.634999...)
2.78 (can be 2.775 - 2.784999...)
3.14 (can be 3.135 - 3.144999...)
24.04 (can be 24.035 - 24.044999...)
I suspect that these numbers are fractions of integers and that all numerators or all denominators are equal. Choosing 100 as a common denominator would work in this case, that would leave the last value as 2404/100. But there could be a 'simpler' solution with much smaller integers.
How do I efficiently find the smallest common numerator and/or denominator? Or (if that is different) the one that would result in the smallest maximum denominator or numerator, respectively?
Of course I could brute force for small lists/numbers and few decimals. That would find 83/72, 138/72, 170/72, 189/72, 200/72, 226/72 and 1731/72 for this example.
Assuming the numbers don't have too many significant digits and aren't too big, you can try increasing the denominator until you find a valid solution. It is not just brute-forcing: additionally, the following script stays at the number violating the constraints as long as nothing valid is found, in the hope of pushing the denominator up faster, without having to recheck the non-problematic numbers.
It works based on the following formula:
x / y < a / b if x * b < a * y
This means a denominator d is valid if:
ceil(loNum * d / loDen) * hiDen < hiNum * d
The ceil(...) part calculates the smallest possible numerator satisfying the constraint of the low boundary, and the rest checks whether it also satisfies the high boundary.
It would be better to work with real integer calculations, e.g. just longs in Java; then the ceil part becomes:
(loNum * d + loDen - 1) / loDen
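As an example, the first value 1.15 yields the bounds 1145/1000 = 229/200 and 1155/1000 = 231/200. For d = 72 the smallest admissible numerator is ceil(229 * 72 / 200) = 83, and 83 * 200 = 16600 < 231 * 72 = 16632, so 83/72 satisfies both bounds.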
function findRatios(arr) {
    let lo = [], hi = [], consecutive = 0, d = 1
    for (let i = 0; i < arr.length; i++) {
        let x = '' + arr[i], len = x.length, dot = x.indexOf('.'),
            num = parseInt(x.substr(0, dot) + x.substr(dot + 1)) * 10,
            den = Math.pow(10, len - dot),
            loGcd = gcd(num - 5, den), hiGcd = gcd(num + 5, den)
        lo[i] = {num: (num - 5) / loGcd, den: den / loGcd}
        hi[i] = {num: (num + 5) / hiGcd, den: den / hiGcd}
    }
    for (let index = 0; consecutive < arr.length; index = (index + 1) % arr.length) {
        if (!valid(d, lo[index], hi[index])) {
            consecutive = 1
            d++
            while (!valid(d, lo[index], hi[index]))
                d++
        } else {
            consecutive++
        }
    }
    for (let i = 0; i < arr.length; i++)
        console.log(Math.ceil(lo[i].num * d / lo[i].den) + ' / ' + d)
}
function gcd(x, y) {
    while (y) {
        let t = y
        y = x % y
        x = t
    }
    return x
}
function valid(d, lo, hi) {
    let n = Math.ceil(lo.num * d / lo.den)
    return n * hi.den < hi.num * d
}
findRatios([1.15, 1.92, 2.36, 2.63, 2.78, 3.14, 24.04])
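For this input the script should settle on d = 72 and print 83 / 72, 138 / 72, 170 / 72, 189 / 72, 200 / 72, 226 / 72 and 1731 / 72, i.e. the same fractions the brute-force search in the question found.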

Optimized point on line finding algorithm

I'm looking for an optimized integer-based point-on-line algorithm, where you can define the line using begin and end coordinates, and the point to find based on either an x or y input.
I know how to do this using dy/dx division but I'm looking for an algorithm that eliminates all divisions.
This is what I'm currently doing:
int lerpmult = ((px - v0.x) << 16) / (v1.x - v0.x);
vec2 result{px, v0.y + ((lerpmult * (v1.y - v0.y)) >> 16)};
The division in the first line is the problem I'm trying to eliminate.
One trick to solve this would be using the scalar product to determine the cosine of the angle between two vectors:
def line_test(a, b, p):
    v_ap = tuple(m - n for n, m in zip(a, p))
    v_ab = tuple(m - n for n, m in zip(a, b))
    scp = sum(m * n for m, n in zip(v_ap, v_ab))
    return scp > 0 and scp * scp == sum(n * n for n in v_ap) * sum(n * n for n in v_ab) and all(m <= n for m, n in zip(v_ap, v_ab))
The parameters of the above function are the end-points of the line (a and b) and the point p (c in the image), which we want to test.
Step by step the following happens in each line:
v_ap = tuple(m - n for n, m in zip(a, p))
We calculate the vector from a to p (v_ap)
v_ab = tuple(m - n for n, m in zip(a, b))
The vector from a to b (v_ab)
scp = sum(m * n for m, n in zip(v_ap, v_ab))
In this line the scalar product of v_ap and v_ab is calculated. The result is scp = cos(v_ab, v_ap) * euclidean_length(v_ab) * euclidean_length(v_ap), where the euclidean length of a vector is defined as sqrt(sum(n * n for n in vector)) (the standard definition of the geometric length of a vector).
return scp > 0 and scp * scp == sum(n * n for n in v_ap) * sum(n * n for n in v_ab) and all(m <= n for m, n in zip(v_ap, v_ab))
This line is pretty complex, so I'll break it down into a few parts:
scp * scp == sum(n * n for n in v_ap) * sum(n * n for n in v_ab)
Since division isn't allowed, we shouldn't use the square root either, since its calculation usually involves divisions. So instead of calculating the square root, we compare the square of the scalar product with the product of the squared euclidean lengths of both vectors, thus eliminating the square-root calculation:
scp = cos(v_ab, v_ap) * euclidean_length(v_ab) * euclidean_length(v_ap) =
= cos(v_ab, v_ap) * sqrt(sum(n ^ 2 for n in v_ab)) * sqrt(sum(n ^ 2 for n in v_ap))
scp ^ 2 = cos(v_ab, v_ap) ^ 2 * sum(n ^ 2 for n in v_ab) * sum(n ^ 2 for n in v_ap)
The cosine of the angle between the two vectors should be 1 if they point in the same direction. So, if the vectors share the same direction, the square of the scalar product would be
euclidean_length(v_ap) ^ 2 * euclidean_length(v_ab) ^ 2
which we then compare to the squared scalar product scp * scp.
This however leaves one problem: taking the square eliminates the sign, which we check separately with the comparison scp > 0. Since the euclidean lengths are always positive, only the sign of the cosine determines the sign of scp. A negative value of scp means that the angle between v_ap and v_ab is greater than pi / 2. The sign of scp gets lost when squaring, which means we could only check whether the two vectors are parallel, not whether they point in the same direction. This problem is solved by checking scp > 0 in addition.
Last but not least we have to check whether the distance from a to p is shorter than the distance from a to b. This can be done by checking whether v_ap has a smaller length than v_ab. Since we already checked that the two vectors point in exactly the same direction, it is sufficient to check whether all elements in v_ap are at most as large as the corresponding elements in v_ab, which is done by
all(m <= n for m, n in zip(v_ap, v_ab))
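For example, with a = (0, 0) and b = (8, 4), line_test((0, 0), (8, 4), (4, 2)) should return True (scp = 40 and 40 * 40 = 1600 = 20 * 80), while line_test((0, 0), (8, 4), (4, 3)) should return False because 44 * 44 = 1936 differs from 25 * 80 = 2000.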
The answer you are looking for is as follows:
Let's say our line equation is Ax + By + C = 0. Then we just need these three coefficients (A, B and C).
Say this line goes through points P(P_x, P_y) and Q(Q_x, Q_y). Then it is easy to calculate the above three coefficients:
A = P_y - Q_y,
B = Q_x - P_x,
C = -A * P_x - B * P_y
Once we have our line equation, we can easily calculate the x or y coordinate for a given y or x, respectively.
Here is my C++ template:
#include <iostream>
using namespace std;
// point struct
struct pt {
    int x, y;
};
// line struct
struct line {
    int a, b, c;
    // create line object
    line() {}
    line(pt p, pt q) {
        a = p.y - q.y;
        b = q.x - p.x;
        c = -a * p.x - b * p.y;
    }
    // a != 0 must be true, otherwise a division-by-zero runtime error will occur
    int getX(int y) {
        return (-b * y - c) / a;
    }
    // b != 0 must be true, otherwise a division-by-zero runtime error will occur
    int getY(int x) {
        return (-a * x - c) / b;
    }
};
int main() {
    pt p, q;
    p.x = 1, p.y = 2;
    q.x = 3, q.y = 6;
    line m = line(p, q);
    cout << "for y = 4, x = " << m.getX(4) << endl;
    cout << "for x = 2, y = " << m.getY(2) << endl;
    return 0;
}
Output:
for y = 4, x = 2
for x = 2, y = 4
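These numbers follow directly from the formulas above: for P(1, 2) and Q(3, 6) we get A = 2 - 6 = -4, B = 3 - 1 = 2 and C = -(-4) * 1 - 2 * 2 = 0, i.e. the line -4x + 2y = 0, or y = 2x. Then getX(4) = (-2 * 4 - 0) / (-4) = 2 and getY(2) = (4 * 2 - 0) / 2 = 4.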
Ref: http://e-maxx.ru/algo/segments_intersection

How to find ith item in zigzag ordering?

A question last week defined the zig zag ordering on an n by m matrix and asked how to list the elements in that order.
My question is how to quickly find the ith item in the zigzag ordering? That is, without traversing the matrix (for large n and m that's much too slow).
For example with n=m=8 as in the picture and (x, y) describing (row, column)
f(0) = (0, 0)
f(1) = (0, 1)
f(2) = (1, 0)
f(3) = (2, 0)
f(4) = (1, 1)
...
f(63) = (7, 7)
Specific question: what is the ten billionth (1e10) item in the zigzag ordering of a million by million matrix?
Let's assume that the desired element is located in the upper half of the matrix. The lengths of the diagonals are 1, 2, 3, ..., n.
Let's find the desired diagonal. It satisfies the following property:
sum(1, 2, ..., k) >= pos but sum(1, 2, ..., k - 1) < pos. The sum of 1, 2, ..., k is k * (k + 1) / 2. So we just need to find the smallest integer k such that k * (k + 1) / 2 >= pos. We can either use a binary search or solve this quadratic inequality explicitly.
When we know k, we just need to find the (pos - (k - 1) * k / 2)-th element of this diagonal. We know where it starts and in which direction we should move (up or down, depending on the parity of k), so we can find the desired cell using a simple formula.
This solution has an O(1) or an O(log n) time complexity (it depends on whether we use a binary search or solve the inequality explicitly in step 2).
If the desired element is located in the lower half of the matrix, we can solve this problem for a pos' = n * n - pos + 1 and then use symmetry to get the solution to the original problem.
I used 1-based indexing in this solution, using 0-based indexing might require adding +1 or -1 somewhere, but the idea of the solution is the same.
If the matrix is rectangular rather than square, we need to take into account that the lengths of the diagonals look like this: 1, 2, 3, ..., m, m, m, ..., m, m - 1, ..., 1 (if m <= n) when we search for k, so the sum becomes something like k * (k + 1) / 2 if k <= m and m * (m + 1) / 2 + m * (k - m) otherwise.
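As a small check against the 8x8 example from the question, take 1-based pos = 5: the smallest k with k * (k + 1) / 2 >= 5 is k = 3, and pos - (k - 1) * k / 2 = 5 - 3 = 2, so we need the 2nd cell of the 3rd diagonal; that diagonal starts at (2, 0) and moves up, giving (1, 1), which matches f(4) = (1, 1) in 0-based numbering.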
import math, random

def naive(n, m, ord, swap = False):
    dx = 1
    dy = -1
    if swap:
        dx, dy = dy, dx
    cur = [0, 0]
    for i in range(ord):
        cur[0] += dy
        cur[1] += dx
        if cur[0] < 0 or cur[1] < 0 or cur[0] >= n or cur[1] >= m:
            dx, dy = dy, dx
            if cur[0] >= n:
                cur[0] = n - 1
                cur[1] += 2
            if cur[1] >= m:
                cur[1] = m - 1
                cur[0] += 2
            if cur[0] < 0: cur[0] = 0
            if cur[1] < 0: cur[1] = 0
    return cur

def fast(n, m, ord, swap = False):
    if n < m:
        x, y = fast(m, n, ord, not swap)
        return [y, x]
    alt = n * m - ord - 1
    if alt < ord:
        x, y = fast(n, m, alt, swap if (n + m) % 2 == 0 else not swap)
        return [n - x - 1, m - y - 1]
    if ord < (m * (m + 1) / 2):
        diag = int((-1 + math.sqrt(1 + 8 * ord)) / 2)
        parity = (diag + (0 if swap else 1)) % 2
        within = ord - (diag * (diag + 1) / 2)
        if parity: return [diag - within, within]
        else: return [within, diag - within]
    else:
        ord -= (m * (m + 1) / 2)
        diag = int(ord / m)
        within = ord - diag * m
        diag += m
        parity = (diag + (0 if swap else 1)) % 2
        if not parity:
            within = m - within - 1
        return [diag - within, within]

if __name__ == "__main__":
    for i in range(1000):
        n = random.randint(3, 100)
        m = random.randint(3, 100)
        ord = random.randint(0, n * m - 1)
        swap = random.randint(0, 99) < 50
        na = naive(n, m, ord, swap)
        fa = fast(n, m, ord, swap)
        assert na == fa, "(%d, %d, %d, %s) ==> (%s), (%s)" % (n, m, ord, swap, na, fa)
    print fast(1000000, 1000000, 9999999999, False)
    print fast(1000000, 1000000, 10000000000, False)
So the 10-billionth element (the one with ordinal 9999999999), and the 10-billion-first element (the one with ordinal 10^10) are:
[20331, 121089]
[20330, 121090]
An analytical solution
In the general case, your matrix will be divided into 3 areas:
an initial triangle t1
a skewed part mid where diagonals have a constant length
a final triangle t2
Let's call p the index of your diagonal run.
We want to define two functions x(p) and y(p) that give you the column and row of the pth cell.
Initial triangle
Let's look at the initial triangular part t1, where each new diagonal is one unit longer than the preceding.
Now let's call d the index of the diagonal that holds the cell, and
Sp = sum(di) for i in [0..d-1], where di = i + 1 is the length of diagonal i
We have p = Sp + k, with 0 <= k <= d and
Sp = d(d+1)/2
if we solve for d, it brings
d²+d-2p = 0, a quadratic equation where we retain only the positive root:
d = (-1+sqrt(1+8*p))/2
Now we want the largest integer value not exceeding d, which is floor(d).
In the end, we have
p = d(d+1)/2 + k with d = floor((-1+sqrt(1+8*p))/2) and k = p - d(d+1)/2
Let's call
o(d) the function that equals 1 if d is odd and 0 otherwise, and
e(d) the function that equals 1 if d is even and 0 otherwise.
We can compute x(p) and y(p) like so:
d = floor((-1+sqrt(1+8*p))/2)
k = p - d(d+1)/2
o = d % 2
e = 1 - o
x = e*d + (o-e)*k
y = o*d + (e-o)*k
The even and odd functions are used to try to salvage some clarity, but you can replace e(d) with 1 - o(d) and have slightly more efficient but less symmetric formulas for x and y.
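As a quick sanity check, p = 4 gives d = floor((-1 + sqrt(33)) / 2) = 2, k = 4 - 3 = 1, o = 0 and e = 1, hence x = y = 1, which matches f(4) = (1, 1) in the question's 8x8 example.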
Middle part
Let's consider the smallest matrix dimension s, i.e. s = min(m, n).
The previous formulas hold until x or y (whichever comes first) reaches the value s.
The upper bound of p such that x(i) <= s and y(i) <= s for all i in [0..p]
(i.e. the cell indexed by p is inside the initial triangle t1) is given by
pt1 = s(s+1)/2.
For p >= pt1, diagonal length remains equal to s until we reach the second triangle t2.
when inside mid, we have:
p = s(s+1)/2 + ds + k with k in [0..s[.
which yields:
d = floor((p - s(s+1)/2)/s)
k = p - s(s+1)/2 - d*s
We can then use the same even/odd trick to compute x(p) and y(p):
p -= s(s+1)/2
d = floor (p / s)
k = p - d*s
o = (d+s) % 2
e = 1 - o
x = o*s + (e-o)*k
y = e*s + (o-e)*k
if (n > m)
x += d+e
y -= e
else
y += d+o
x -= o
Final triangle
Using symmetry, we can calculate pt2 = m*n - s(s+1)/2
We now face nearly the same problem as for t1, except that the diagonal may run in the same direction as for t1 or in the reverse direction (if n+m is odd).
Using symmetry tricks, we can compute x(p) and y(p) like so:
p = n*m -1 - p
d = floor((-1+sqrt(1+8*p))/2)
k = p - d*(d+1)/2
o = (d+m+n) % 2
e = 1 - o
x = n-1 - (o*d + (e-o)*k)
y = m-1 - (e*d + (o-e)*k)
Putting all together
Here is a sample c++ implementation.
I used 64-bit integers out of sheer laziness. Most could be replaced by 32-bit values.
The computations could be made more efficient by precomputing a few more coefficients.
A good part of the code could be factorized, but I doubt it is worth the effort.
Since this is just a quick and dirty proof of concept, I did not optimize it.
#include <cstdio>    // printf
#include <cstdlib>   // exit
#include <cmath>     // sqrt, floor
#include <algorithm> // min
using namespace std;

typedef long long tCoord;

void panic(const char * msg)
{
    printf("PANIC: %s\n", msg);
    exit(-1);
}

struct tPoint {
    tCoord x, y;
    tPoint(tCoord x = 0, tCoord y = 0) : x(x), y(y) {}
    tPoint operator+(const tPoint & p) const { return{ x + p.x, y + p.y }; }
    bool operator!=(const tPoint & p) const { return x != p.x || y != p.y; }
};

class tMatrix {
    tCoord n, m;     // dimensions
    tCoord s;        // smallest dimension
    tCoord pt1, pt2; // t1 / mid / t2 limits for p
public:
    tMatrix(tCoord n, tCoord m) : n(n), m(m)
    {
        s = min(n, m);
        pt1 = (s*(s + 1)) / 2;
        pt2 = n*m - pt1;
    }
    tPoint diagonal_cell(tCoord p)
    {
        tCoord x, y;
        if (p < pt1) // inside t1
        {
            tCoord d = (tCoord)floor((-1 + sqrt(1 + 8 * p)) / 2);
            tCoord k = p - (d*(d + 1)) / 2;
            tCoord o = d % 2;
            tCoord e = 1 - o;
            x = o*d + (e - o)*k;
            y = e*d + (o - e)*k;
        }
        else if (p < pt2) // inside mid
        {
            p -= pt1;
            tCoord d = (tCoord)floor(p / s);
            tCoord k = p - d*s;
            tCoord o = (d + s) % 2;
            tCoord e = 1 - o;
            x = o*s + (e - o)*k;
            y = e*s + (o - e)*k;
            if (m > n) // vertical matrix
            {
                x -= o;
                y += d + o;
            }
            else // horizontal matrix
            {
                x += d + e;
                y -= e;
            }
        }
        else // inside t2
        {
            p = n * m - 1 - p;
            tCoord d = (tCoord)floor((-1 + sqrt(1 + 8 * p)) / 2);
            tCoord k = p - (d*(d + 1)) / 2;
            tCoord o = (d + m + n) % 2;
            tCoord e = 1 - o;
            x = n - 1 - (o*d + (e - o)*k);
            y = m - 1 - (e*d + (o - e)*k);
        }
        return{ x, y };
    }
    void check(void)
    {
        tPoint move[4] = { { 1, 0 }, { -1, 1 }, { 1, -1 }, { 0, 1 } };
        tPoint pos;
        tCoord dir = 0;
        for (tCoord p = 0; p != n * m; p++)
        {
            tPoint dc = diagonal_cell(p);
            if (pos != dc) panic("zot!");
            pos = pos + move[dir];
            if (dir == 0)
            {
                if (pos.y == m - 1) dir = 2;
                else dir = 1;
            }
            else if (dir == 3)
            {
                if (pos.x == n - 1) dir = 1;
                else dir = 2;
            }
            else if (dir == 1)
            {
                if (pos.y == m - 1) dir = 0;
                else if (pos.x == 0) dir = 3;
            }
            else
            {
                if (pos.x == n - 1) dir = 3;
                else if (pos.y == 0) dir = 0;
            }
        }
    }
};

int main(void)
{
    const tPoint dim[] = { { 10, 10 }, { 11, 11 }, { 10, 30 }, { 30, 10 }, { 10, 31 }, { 31, 10 }, { 11, 31 }, { 31, 11 } };
    for (tPoint d : dim)
    {
        printf("Checking a %lldx%lld matrix...", d.x, d.y);
        tMatrix(d.x, d.y).check();
        printf("done\n");
    }
    tCoord p = 10000000000;
    tMatrix matrix(1000000, 1000000);
    tPoint cell = matrix.diagonal_cell(p);
    printf("Coordinates of %lldth cell: (%lld,%lld)\n", p, cell.x, cell.y);
    return 0;
}
Results are checked against "manual" sweep of the matrix.
This "manual" sweep is a ugly hack that won't work for a one-row or one-column matrix, though diagonal_cell() does work on any matrix (the "diagonal" sweep becomes linear in that case).
The coordinates found for the 10.000.000.000th cell of a 1.000.000x1.000.000 matrix seem consistent, since the diagonal d on which the cell stands is about sqrt(2*1e10), approx. 141421, and the sum of cell coordinates is about equal to d (121090+20330 = 141420). Besides, it is also what the two other posters report.
I would say there is a good chance this lump of obfuscated code actually produces an O(1) solution to your problem.
