total path probability of weighted circular graph - algorithm

Let's say there is a game where each move has several possible paths, depending on the throw of fancy dice. Depending on the result there can be transitions forward, backward, or staying in place. Eventually (even if only after an infinite number of throws) the graph leads to final states. Each edge is weighted with a probability.
For the case where there are no loops I can simply sum and multiply the probabilities along the paths and re-normalize the probabilities for each outcome, if I start at the same vertex (cell).
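For concreteness, here is a tiny sketch of that loop-free case (my own, with a made-up acyclic example, not part of the original question): push the probability mass forward in topological order, and whatever lands on a node with no outgoing edges is the probability of finishing there.
const edges = {
    start: [["mid", 0.5], ["endA", 0.5]],
    mid:   [["endA", 0.25], ["endB", 0.75]],
    endA:  [],
    endB:  []
};
const mass = { start: 1, mid: 0, endA: 0, endB: 0 };
for (const node of ["start", "mid", "endA", "endB"]) { // topological order
    for (const [next, p] of edges[node]) {
        mass[next] += mass[node] * p; // forward the mass along each edge
    }
}
console.log(mass.endA, mass.endB); // 0.625 0.375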
However, if I have loops it starts getting confusing. For example, let's say we have the same probability on each edge:
  start0
   /\ ^
  /  \|
end1  tr2
      /
   end2
The graph starts at start0, where there is a 50% chance of terminating at end1 and a 50% chance of going to the transition node tr2. From tr2 there is again a 50% chance of terminating at end2 and a 50% chance of going back to start0.
How could I calculate the total probability of reaching each end state, end1 and end2? If I try using a converging series like this:
pEnd1 = 1/2 + 1/2*1/2 + 1/8 + ... -> lim -> 1, which makes no sense, since end2 gets no probability. Obviously I have a mistake there.
So my question is: how can I calculate the probabilities of reaching the final nodes, given the probability of each edge, when the graph may contain loops?
example 1) simple fork with a loop; all edges are 50% probable
start0-> p=50% ->end1
start0-> p=50% ->tr2
tr2-> p=50% ->start0
tr2-> p=50% ->end2
example 2) more loops
start0-> p=1/3 ->e1
start0-> p=1/3 ->tr1
start0-> p=1/3 ->start0
tr1-> p=1/3 ->tr2
tr1-> p=2/3 ->start0
tr2-> p=7/9 ->start0
tr2-> p=2/9 ->end2
example 3) - degenerate test case - since all paths end at e1, it should end up having 100% probability
start0-> p=1/3 ->e1
start0-> p=2/3 ->tr1
tr1-> p=3/4 ->start0
tr1-> p=1/4 ->e1

This is not really a programming problem, although you could write a simulation and perform it 100000 times to see what the distribution would be.
You wrote:
pEnd1 = 1/2 + 1/2*1/2 + 1/8 + ... -> lim -> 1, which makes no sense, since end2 gets no probability. Obviously I have a mistake there.
Indeed, there is a mistake. You did not take into account the probability of going from tr2 back to start0 (50%). The probability that the path cycles once through start0 and then ends up in end1 is 1/2 (going to tr2) * 1/2 (going back to start0) * 1/2 (going to end1). The number of 50% decisions is always odd when ending up in end1, and even when ending up in end2. So the formula would be:
pEnd1 = 2⁻¹ + 2⁻³ + 2⁻⁵ + ... -> lim = 2/3
pEnd2 = 2⁻² + 2⁻⁴ + 2⁻⁶ + ... -> lim = 1/3
(both are geometric series with ratio 1/4, so the limits are (1/2)/(1 - 1/4) = 2/3 and (1/4)/(1 - 1/4) = 1/3)
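As a cross-check (my own addition, not part of the original answer), the same numbers also drop out of a tiny linear system: with p = P(end1 | start0) and q = P(end1 | tr2), we have p = 1/2 + 1/2*q and q = 1/2*p, hence p = 2/3:
// p = 0.5 + 0.5*q with q = 0.5*p  =>  p = 0.5 / (1 - 0.25)
const p = 0.5 / (1 - 0.5 * 0.5);
console.log(p);     // 0.666... -> probability of ending in end1
console.log(1 - p); // 0.333... -> probability of ending in end2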
Simulation
To make this a programming question, here is a simulation in JavaScript:
function run(transitions, state) {
    while (transitions[state][state] != 1) { // not in sink
        let probs = transitions[state];
        let rnd = Math.random(); // in range [0, 1)
        for (let i = 0; i < probs.length; i++) {
            rnd -= probs[i];
            if (rnd < 0) {
                state = i; // transition
                break;
            }
        }
    }
    return state;
}

// Define graph
let names = ["start0", "end1", "tr2", "end2"]
let transitions = [
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 1.0, 0.0, 0.0], // sink
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.0, 0.0, 1.0]  // sink
];

// Start sampling
let numResults = [0, 0, 0, 0];
let numSamples = 0;
setInterval(function () {
    let endstate = run(transitions, 0);
    numSamples++;
    numResults[endstate]++;
    document.querySelector("#" + names[endstate]).textContent =
        (100 * numResults[endstate] / numSamples).toFixed(4) + "%";
}, 1);
<div>Arriving in end1: <span id="end1"></span></div>
<div>Arriving in end2: <span id="end2"></span></div>
You may also like to read about Absorbing Markov chains. From that we learn that the "absorbing probabilities" matrix B can be calculated with the formula:
B = NR
Where:
N is the "fundamental matrix" (I - Q)⁻¹     
I is the identity matrix of the same shape as Q
Q is the probability matrix for transitions between non-final states
R is the probability matrix for transitions to final states
Here is a script (including the relevant matrix functions) to calculate B for the example problem you depicted:
// Probabilities to go from one non-final state to another
let Q = [
    [0.0, 0.5], // from start0 to [start0, tr2]
    [0.5, 0.0]  // from tr2 to [start0, tr2]
];
// Probabilities to go from one non-final state to a final one
let R = [
    [0.5, 0.0], // from start0 to [end1, end2]
    [0.0, 0.5]  // from tr2 to [end1, end2]
];

// See https://en.wikipedia.org/wiki/Absorbing_Markov_chain#Absorbing_probabilities
let N = inversed(sum(identity(Q.length), scalarProduct(Q, -1)));
let B = product(N, R);
console.log("B = (I-Q)⁻¹R:\n" + str(B));

// Generic matrix utility functions:
// cofactor is copy of given matrix without given column and given row
function cofactor(a, y, x) {
    return a.slice(0, y).concat(a.slice(y+1)).map(row => row.slice(0, x).concat(row.slice(x+1)));
}
function determinant(a) {
    return a.length == 1 ? a[0][0] : a.reduceRight((sum, row, y) =>
        a[y][0] * determinant(cofactor(a, y, 0)) - sum
    , 0);
}
function adjoint(a) {
    return a.length == 1 ? [[1]] : transposed(a.map((row, y) =>
        row.map((_, x) => ((x + y) % 2 ? -1 : 1) * determinant(cofactor(a, y, x)))
    ));
}
function transposed(a) {
    return a[0].map((_, x) => a.map((_, y) => a[y][x]));
}
function scalarProduct(a, coeff) {
    return a.map((row, y) => row.map((val, x) => val * coeff));
}
function inversed(a) {
    return scalarProduct(adjoint(a), 1 / determinant(a));
}
function product(a, b) {
    return a.map((rowa) =>
        b[0].map((_, x) =>
            b.reduce((sum, rowb, z) =>
                sum + rowa[z] * rowb[x]
            , 0)
        )
    );
}
function sum(a, b) {
    return a.map((row, y) => row.map((val, x) => val + b[y][x]));
}
function identity(length) {
    return Array.from({length}, (_, y) =>
        Array.from({length}, (_, x) => +(y == x))
    );
}
function str(a) {
    return a.map(row => JSON.stringify(row)).join("\n");
}
The output is:
[
    [2/3, 1/3], // probabilities when starting in start0 and ending in [end1, end2]
    [1/3, 2/3]  // probabilities when starting in tr2 and ending in [end1, end2]
]

You are describing a discrete-time discrete-state-space Absorbing Markov Chain.
In your example, end1 and end2 are absorbing states.
The referenced Wikipedia article describes how to calculate absorbing probabilities (or absorption probabilities).
See also here and here.
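To make that concrete for example 2 from the question, here is a small self-contained sketch (my own, not from either answer) that builds Q and R for the non-final states [start0, tr1, tr2] and the final states [e1, end2], and solves (I - Q)B = R with plain Gauss-Jordan elimination instead of an explicit matrix inverse:
// Non-final states: [start0, tr1, tr2]; final states: [e1, end2] (example 2).
const Q = [
    [1/3, 1/3, 0  ], // from start0 to [start0, tr1, tr2]
    [2/3, 0,   1/3], // from tr1
    [7/9, 0,   0  ]  // from tr2
];
const R = [
    [1/3, 0  ],      // from start0 to [e1, end2]
    [0,   0  ],      // from tr1
    [0,   2/9]       // from tr2
];

// Solve (I - Q) X = R with Gauss-Jordan elimination; X is the matrix B of
// absorption probabilities, computed without explicitly forming the inverse.
function absorptionProbabilities(Q, R) {
    const n = Q.length;
    // augmented matrix [I - Q | R]
    const M = Q.map((row, i) =>
        row.map((v, j) => (i === j ? 1 : 0) - v).concat(R[i])
    );
    for (let col = 0; col < n; col++) {
        // partial pivoting for numerical stability
        let piv = col;
        for (let r = col + 1; r < n; r++)
            if (Math.abs(M[r][col]) > Math.abs(M[piv][col])) piv = r;
        [M[col], M[piv]] = [M[piv], M[col]];
        // normalize the pivot row, then eliminate the column in the other rows
        const p = M[col][col];
        for (let c = 0; c < M[col].length; c++) M[col][c] /= p;
        for (let r = 0; r < n; r++) {
            if (r === col) continue;
            const f = M[r][col];
            for (let c = 0; c < M[r].length; c++) M[r][c] -= f * M[col][c];
        }
    }
    return M.map(row => row.slice(n)); // keep only the right block: B = (I-Q)⁻¹R
}

console.log(absorptionProbabilities(Q, R));
// The start0 row should come out near [0.931, 0.069], i.e. 27/29 and 2/29.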

Related

How to quickly judge whether two sets of doubles intersect?

I want to quickly judge whether two sets of doubles intersect.
Problem
The elements can lie anywhere in the double range and the sets are not ordered. Each set has 100,000+ elements.
If there exist a double a from set A and a double b from set B that are very close, for example abs(a-b) < 1e-6, we say sets A and B intersect.
My way
calculate the range (lower bound and upper bound) of set_A and set_B
O(n), where n is the set size
calculate the range intersection range_intersect of range_A and range_B
O(1)
if range_intersect is empty, the two sets do not intersect
O(1)
if range_intersect is not empty, find sub_set_A from set_A which lies in range_intersect, and sub_set_B from set_B which lies in range_intersect
O(n)
sort sub_set_A and sub_set_B
O(m log m), where m is sub_set_A's size
traverse sub_set_A_sorted and sub_set_B_sorted with two pointers and check whether any pair of elements is close; if such a pair exists the two sets intersect, otherwise they do not
O(m)
My way works (see the two-pointer sketch below), but I wonder if I can do it faster.
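For reference, here is a rough sketch (my own, not part of the question) of the final two-pointer step on the two sorted subsets:
// After sorting, one merge-like pass decides whether any a from A and b from B
// satisfy |a - b| < eps.
function closePairExists(sortedA, sortedB, eps = 1e-6) {
    let i = 0, j = 0;
    while (i < sortedA.length && j < sortedB.length) {
        const diff = sortedA[i] - sortedB[j];
        if (Math.abs(diff) < eps) return true; // found a close pair
        if (diff < 0) i++; else j++;           // advance the pointer at the smaller value
    }
    return false;
}
console.log(closePairExists([1.0, 2.5, 7.3], [2.5000004, 9.9])); // true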
Appendix
Why I want this:
Actually I am facing the problem of judging whether two point sets A and B collide. Each point p in a point set has double coordinates x, y, z. If there exist a point a from point set A and a point b from point set B whose coordinates are very close, we say point sets A and B collide.
In the 3D case, we can define an order on points by first comparing x, then y, and finally z.
We can define closeness as: if the coordinates are close in every dimension, the two points are close.
This problem can then be converted to the problem above.
An idea based on gridding the space:
Let's take the point (1.2, 2.4, 3.6) with a required minimal distance of 1.
We may say that this point "touches" 8 grid cubes of R^3 (of side 0.5 here, i.e. half the required distance):
[
(1.0, 2.0, 3.5)
(1.0, 2.0, 4.0)
(1.0, 2.5, 3.5) // 1 < 1.2 < 1.5
(1.0, 2.5, 4.0) // 2 < 2.4 < 2.5
(1.5, 2.0, 3.5) // 3.5 < 3.6 < 4
(1.5, 2.0, 4.0)
(1.5, 2.5, 3.5)
(1.5, 2.5, 4.0)
]
If two points are close to each other, they will be connected by at least one of their cubes.
      y
      ^
      |
    3 +---+---+
      |   |   |
  2.5 +-------+---+---+
      | a |   | c | b |
    2 +---+---+---+---+---> x
      1  1.5  2
In the 2D example above, a is (1.2, 2.4).
Say b is (2.5, 2.4): b touches the square (2, 2), but a does not.
So they are not connected (indeed the minimal possible distance between them is 2.5 - 1.5 = 1).
Say c is (2.45, 2.4): c touches the square (1.5, 2), and so does a, so we check that pair.
The main idea is to associate to each point its 8 cubes.
We can associate a unique hash to each cube: one corner's coordinates, e.g. "{x}-{y}-{z}".
To check if A intersects B:
we build, for each point of A, its 8 hashes and store them in a hashmap: hash -> point
for each point of B, we build its hashes, and if one of them exists in the hashmap we check whether the corresponding points are actually in relation
Now consider
      y
      ^
      |
    3 +---+---+
      | a2|   |
  2.5 +-------+
      | a1|   |
    2 +---+---+
      1  1.5  2
a1's and a2's hashes will overlap on the squares (1, 2) and (1, 2.5), so the hashmap actually maps hash -> points (plural).
This implies that the worst case could be O(n^2) if all the points land in the same cubes. Hopefully in reality they won't?
Below is some code with irrelevant (random) data (capped at 10**4 points to avoid freezing the UI):
function roundEps (c, nth) {
const eps = 10**-nth
const r = (c % eps)
const v = (r >= eps / 2) ? [c-r+eps/2, c-r+eps] : [c-r, c-r+eps/2]
return v.map(x => x.toFixed(nth + 1))
}
function buildHashes (p, nth) {
return p.reduce((hashes, c) => {
const out = []
hashes.forEach(hash => {
const [l, u] = roundEps(c, nth)
out.push(`${hash},${l}`, `${hash},${u}`)
})
return out
},[''])
}
function buildMap (A, nth) {
const hashToPoints = new Map()
A.forEach(p => {
const hashes = buildHashes(p, nth)
hashes.forEach(hash => {
const v = hashToPoints.get(hash) || []
v.push(p)
hashToPoints.set(hash, v)
})
})
return hashToPoints
}
function intersects (m, b, nth, R) {
let processed = new Set()
return buildHashes(b, nth).some(hash => {
if (!m.has(hash)) return
const pts = m.get(hash)
if (processed.has(pts)) return
processed.add(pts)
return pts.some(p => R(p, b))
})
}
function d (a, b) {
return a.reduce((dist, x, i) => {
return Math.max(dist, Math.abs(x-b[i]))
}, 0)
}
function checkIntersection (A, B, nth=2) {
const m = buildMap(A, nth)
return B.some(b => intersects(m, b, nth, (a,b) => d(a, b) < 10**(-nth)))
}
// ephemeral testing :)
/*
function test () {
const assert = require('assert')
function testRound () {
assert.deepEqual(roundEps(127.857, 2), ['127.855', '127.860'])
assert.deepEqual(roundEps(127.853, 2), ['127.850', '127.855'])
assert.deepEqual(roundEps(127.855, 2), ['127.855', '127.860'])
}
function testD () {
assert.strictEqual(d([1,2,3],[5,1,2]), 4)
assert.strictEqual(d([1,2,3],[0,1,2]), 1)
}
function testCheckIntersection () {
{
const A = [[1.213,2.178,1.254],[0.002,1.231,2.695]]
const B = [[1.213,2.178,1.254],[0.002,1.231,2.695]]
assert(checkIntersection(A, B))
}
{
const A = [[1.213,2.178,1.254],[0.002,1.231,2.695]]
const B = [[10,20,30]]
assert(!checkIntersection(A, B))
}
{
const A = [[0,0,0]]
const B = [[0,0,0.06]]
assert(!checkIntersection(A, B, 2))
}
{
const A = [[0,0,0.013]]
const B = [[0,0,0.006]]
assert(checkIntersection(A, B, 2))
}
}
testRound()
testD()
testCheckIntersection()
}*/
const A = []
const B = []
for (let i = 0; i < 10**4; ++i) {
A.push([Math.random(), Math.random(), Math.random()])
B.push([Math.random(), Math.random(), Math.random()])
}
console.time('start')
console.log('intersect? ', checkIntersection(A, B, 6))
console.timeEnd('start')

Find 'average' with equal upper and lower distance to values of a given set

I recently encountered the following problem:
Given a set of points with heights yᵢ, find the height of the line for which the average distance to the points above equals the average distance to the points below the line.
More abstract definition: Given a set of real valued data points Y = {y1, ..., yn}, find ȳ which splits Y into two sets Y⁺ = {y ∊ Y : y > ȳ} and Y⁻ = {y ∊ Y : y < ȳ} so that the average distance between ȳ and elements of Y⁺ equals the average distance between ȳ and elements of Y⁻.
Naive solution: Initialize ȳ with the average of Y, compute average upper and lower distances and iteratively move up or down depending on whether the upper or lower average distance is greater.
Question: This problem is pretty basic, so there is probably a better solution (?) Even a non-iterative algebraic algorithm?
As mentioned in the comment, if you know which points are above and below the line, then you can solve it like this:
a = number of points above the line
b = number of points below the line
sa = sum of all y above the line
sb = sum of all y below the line
Now we can create the following equation:
(sa - a * y) / a = (b * y - sb) / b | * a * b
sa * b - a * b * y = a * b * y - a * sb | + a * b * y + a * sb
sa * b + a * sb = 2 * a * b * y | / (2 * a * b)
==> y = (a * sb + b * sa) / (2 * a * b)
= sa / (2 * a) + sb / (2 * b)
= (sa / a + sb / b) / 2
If we interpret the result, we could say it is the average of the averages of the points above and below the line.
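For illustration (my own sketch with made-up numbers, not part of the answer), applying that closed form to a split that is already known:
const above = [8, 9, 10]; // points above the line
const below = [1, 2];     // points below the line
const avg = xs => xs.reduce((s, v) => s + v, 0) / xs.length;
const y = (avg(above) + avg(below)) / 2;
console.log(y); // 5.25: average upper distance 9 - 5.25 = 3.75, average lower distance 5.25 - 1.5 = 3.75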
An iterative solution based on maraca's answer:
Initialize ȳ with the mean of the given values.
Split the given values into those above and below ȳ.
Calculate the new optimal ȳ for this split.
Repeat until ȳ converges.
This is slightly faster than the algorithm outlined in the question.
// Find mean with equal average distance to upper and lower values:
function findEqualAverageDistanceMean(values) {
let mean = values.reduce((a, b) => a + b) / values.length,
last = NaN;
// Iteratively equalize average distances:
while (last != mean) {
let lower_total = 0,
lower_n = 0,
upper_total = 0,
upper_n = 0;
for (let value of values) {
if (value > mean) {
upper_total += value;
++upper_n;
} else if (value < mean) {
lower_total += value;
++lower_n;
}
}
last = mean;
mean = (upper_total / upper_n + lower_total / lower_n) / 2;
}
return mean;
}
// Example:
let canvas = document.getElementById("canvas"),
ctx = canvas.getContext("2d"),
points = Array.from({length: 100}, () => Math.random() ** 4),
mean = points.reduce((a, b) => a + b) / points.length,
equalAverageDistanceMean = findEqualAverageDistanceMean(points);
function draw(points, mean, equalAverageDistanceMean) {
for (let [i, point] of points.entries()) {
ctx.fillStyle = (point < equalAverageDistanceMean) ? 'red' : 'green';
ctx.fillRect(i * canvas.width / points.length, canvas.height * point, 3, 3);
}
ctx.fillStyle = 'black';
ctx.fillRect(0, canvas.height * mean, canvas.width, .5);
ctx.fillRect(0, canvas.height * equalAverageDistanceMean, canvas.width, 3);
}
draw(points, mean, equalAverageDistanceMean);
<canvas id="canvas" width="400" height="200">

Solving linear equations

I have to find an integer solution of the equation ax+by=c such that x>=0 and y>=0 and the value of (x+y) is minimal.
I know that if c % gcd(a,b) == 0 then it's always possible. How do I find the values of x and y?
My approach
for(i 0 to 2*c):
    x = i
    y = (c - a*i)/b
    if(y is integer)
        ans = min(ans, x+y)
Is there any better way to do this, with better time complexity?
Using the Extended Euclidean Algorithm and the theory of linear Diophantine equations there is no need to search. Here is a Python 3 implementation:
def egcd(a,b):
    s,t = 1,0  #coefficients to express current a in terms of original a,b
    x,y = 0,1  #coefficients to express current b in terms of original a,b
    q,r = divmod(a,b)
    while(r > 0):
        a,b = b,r
        old_x, old_y = x,y
        x,y = s - q*x, t - q*y
        s,t = old_x, old_y
        q,r = divmod(a,b)
    return b, x, y

def smallestSolution(a,b,c):
    d,x,y = egcd(a,b)
    if c%d != 0:
        return "No integer solutions"
    else:
        u = a//d  #integer division
        v = b//d
        w = c//d
        x = w*x
        y = w*y
        k1 = -x//v if -x % v == 0 else 1 + -x//v  #k1 = ceiling(-x/v)
        x1 = x + k1*v  # x + k1*v is solution with smallest x >= 0
        y1 = y - k1*u
        if y1 < 0:
            return "No nonnegative integer solutions"
        else:
            k2 = y//u  #floor division
            x2 = x + k2*v  #y - k2*u is solution with smallest y >= 0
            y2 = y - k2*u
            if x2 < 0 or x1+y1 < x2+y2:
                return (x1,y1)
            else:
                return (x2,y2)
Typical run:
>>> smallestSolution(1001,2743,160485)
(111, 18)
The way it works: first use the extended Euclidean algorithm to find d = gcd(a,b) and one solution, (x,y). All other solutions are of the form (x+k*v,y-k*u) where u = a/d and v = b/d. Since x+y is linear, it has no critical points, hence is minimized in the first quadrant when either x is as small as possible or y is as small as possible. The k above is an arbitrary integer parameter. By appropriate use of floor and ceiling you can locate the integer points with either x as small as possible or y is as small as possible. Just take the one with the smallest sum.
On Edit: My original code used the Python function math.ceil applied to -x/v. This is problematic for very large integers. I tweaked it so that the ceiling is computed with just int operations. It can now handle arbitrarily large numbers:
>>> a = 236317407839490590865554550063
>>> b = 127372335361192567404918884983
>>> c = 475864993503739844164597027155993229496457605245403456517677648564321
>>> smallestSolution(a,b,c)
(2013668810262278187384582192404963131387, 120334243940259443613787580180)
>>> x,y = _
>>> a*x+b*y
475864993503739844164597027155993229496457605245403456517677648564321
Most of the computation takes place in running the extended Euclidean algorithm, which is known to take O(log(min(a,b))) steps.
First let's assume a,b,c > 0, so:
a.x+b.y = c
x+y = min(xi+yi)
x,y >= 0
a,b,c > 0
------------------------
x = ( c - b.y )/a
y = ( c - a.x )/b
c - a.x >= 0
c - b.y >= 0
c >= b.y
c >= a.x
x <= c/a
y <= c/b
So a naive O(n) solution in C++ looks like this:
void compute0(int &x,int &y,int a,int b,int c) // naive
{
    int xx,yy;
    xx=-1; yy=-1;
    for (y=0;;y++)
    {
        x = c - b*y;
        if (x<0) break;     // y out of range stop
        if (x%a) continue;  // non integer solution
        x/=a;               // remember minimal solution
        if ((xx<0)||(x+y<=xx+yy)) { xx=x; yy=y; }
    }
    x=xx; y=yy;
}
If no solution is found it returns -1,-1. If you think about the equation a bit, you should realize that the minimal solution will be when x or y is minimal (which one depends on the a<b condition), so with that heuristic we can increase only the minimal coordinate until the first solution is found. This speeds the whole thing up considerably:
void compute1(int &x,int &y,int a,int b,int c)
{
if (a<=b){ for (x=0,y=c;y>=0;x++,y-=a) if (y%b==0) { y/=b; return; } }
else { for (y=0,x=c;x>=0;y++,x-=b) if (x%a==0) { x/=a; return; } }
x=-1; y=-1;
}
I measured this on my setup:
             x        y        ax+by      x+y        a=50 b=105 c=500000000
[ 55.910 ms] 10       4761900  500000000  4761910    naive
[  0.000 ms] 10       4761900  500000000  4761910    opt
             x        y        ax+by      x+y        a=105 b=50 c=500000000
[ 99.214 ms] 4761900  10       500000000  4761910    naive
[  0.000 ms] 4761900  10       500000000  4761910    opt
The ~2.0x difference between the naive method times is due to a/b being ~2.0 and the worse coordinate being selected for iteration in the second run.
Now just handle special cases when a,b,c are zero (to avoid division by zero)...
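For completeness, a possible sketch of those special cases (my own, in JavaScript for brevity, assuming non-negative inputs as in the rest of the discussion):
function solveWithZeros(a, b, c) {
    if (a === 0 && b === 0) return c === 0 ? [0, 0] : null; // 0 = c is solvable only for c = 0
    if (a === 0) return c % b === 0 ? [0, c / b] : null;    // only y can contribute
    if (b === 0) return c % a === 0 ? [c / a, 0] : null;    // only x can contribute
    return null; // a, b > 0: fall through to the general routines above
}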

An interview question: About Probability

An interview question:
Given a function f(x) that 1/4 times returns 0, 3/4 times returns 1.
Write a function g(x) using f(x) that 1/2 times returns 0, 1/2 times returns 1.
My implementation is:
function g(x) = {
if (f(x) == 0){ // 1/4
var s = f(x)
if( s == 1) {// 3/4 * 1/4
return s // 3/16
} else {
g(x)
}
} else { // 3/4
var k = f(x)
if( k == 0) {// 1/4 * 3/4
return k // 3/16
} else {
g(x)
}
}
}
Am I right? What's your solution? (You can use any language.)
If you call f(x) twice in a row, the following outcomes are possible (assuming that
successive calls to f(x) are independent, identically distributed trials):
00 (probability 1/4 * 1/4)
01 (probability 1/4 * 3/4)
10 (probability 3/4 * 1/4)
11 (probability 3/4 * 3/4)
01 and 10 occur with equal probability. So iterate until you get one of those
cases, then return 0 or 1 appropriately:
do
a=f(x); b=f(x);
while (a == b);
return a;
It might be tempting to call f(x) only once per iteration and keep track of the two
most recent values, but that won't work. Suppose the very first roll is 1,
with probability 3/4. You'd loop until the first 0, then return 1 (with probability 3/4).
The problem with your algorithm is that it repeats itself with high probability. My code:
function g(x) = {
var s = f(x) + f(x) + f(x);
// s = 0, probability: 1/64
// s = 1, probability: 9/64
// s = 2, probability: 27/64
// s = 3, probability: 27/64
if (s == 2) return 0;
if (s == 3) return 1;
return g(x); // probability to go into recursion = 10/64, with only 1 additional f(x) calculation
}
I've measured the average number of times f(x) is called for your algorithm and for mine. For yours, f(x) is called around 5.3 times per g(x) call. With my algorithm this number is reduced to around 3.5. The same holds for the other answers so far since, as you said, they are actually the same algorithm.
P.S.: your definition doesn't mention 'random' at the moment, but presumably it is assumed. See my other answer.
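For what it's worth, here is a rough way (my own sketch, not from the answer) to reproduce that kind of measurement, using the two-roll rejection method as the example; the printed number should land near 16/3 ≈ 5.33:
function makeF() {
    let calls = 0;
    const f = () => { calls++; return Math.random() < 0.25 ? 0 : 1; };
    f.callCount = () => calls;
    return f;
}
function g(f) { // two rolls per attempt, keep the first roll when they differ
    while (true) {
        const a = f(), b = f();
        if (a !== b) return a;
    }
}
const f = makeF();
const n = 100000;
for (let i = 0; i < n; i++) g(f);
console.log(f.callCount() / n); // ≈ 2 / (2 * 1/4 * 3/4) = 16/3 ≈ 5.33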
Your solution is correct, if somewhat inefficient and with more duplicated logic. Here is a Python implementation of the same algorithm in a cleaner form.
def g ():
    while True:
        a = f()
        if a != f():
            return a
If f() is expensive you'd want to get more sophisticated with using the match/mismatch information to try to return with fewer calls to it. Here is the most efficient possible solution.
def g ():
    lower = 0.0
    upper = 1.0
    while True:
        if 0.5 < lower:
            return 1
        elif upper < 0.5:
            return 0
        else:
            middle = 0.25 * lower + 0.75 * upper
            if 0 == f():
                lower = middle
            else:
                upper = middle
This takes about 2.6 calls to f() per call to g() on average.
The way that it works is this. We're trying to pick a random number from 0 to 1, but we happen to stop as soon as we know whether the number is 0 or 1. We start knowing that the number is in the interval (0, 1). 3/4 of the numbers are in the bottom 3/4 of the interval, and 1/4 are in the top 1/4 of the interval. We decide which based on a call to f(x). This means that we are now in a smaller interval.
If we wash, rinse, and repeat enough times we can determine our finite number as precisely as possible, and will have an absolutely equal probability of winding up in any region of the original interval. In particular we have an even probability of winding up bigger than or less than 0.5.
If you wanted you could repeat the idea to generate an endless stream of bits one by one. This is, in fact, provably the most efficient way of generating such a stream, and is the source of the idea of entropy in information theory.
Given a function f(x) that 1/4 times returns 0, 3/4 times returns 1
Taking this statement literally, f(x), if called four times, will always return 0 once and 1 three times. This is different from saying f(x) is a probabilistic function and the 0 to 1 ratio will approach 1 to 3 (1/4 vs 3/4) over many iterations. If the first interpretation is valid, then the only valid function for f(x) that will meet the criteria regardless of where in the sequence you start from is the repeating sequence 0111 (or 1011 or 1101 or 1110, which are the same sequence from a different starting point). Given that constraint,
g()= (f() == f())
should suffice.
As already mentioned, your definition is not that precise regarding probability: usually it means that not only the frequencies but also the distribution has to be right. Otherwise you can simply write a g(x) that returns 1,0,1,0,1,0,1,0 - it returns them 50/50, but the numbers won't be random.
Another cheating approach might be:
var invert = false;
function g(x) {
invert = !invert;
if (invert) return 1-f(x);
return f(x);
}
This solution will be better than all others since it calls f(x) only one time. But the results will not be very random.
A refinement of the same approach used in btilly's answer, achieving an average of ~1.85 calls to f() per g() result (a further refinement documented below achieves ~1.75; btilly's takes ~2.6, Jim Lewis's accepted answer ~5.33). Code appears lower in the answer.
Basically, I generate random integers in the range 0 to 3 with even probability: the caller can then test bit 0 for the first 50/50 value, and bit 1 for a second. Reason: the f() probabilities of 1/4 and 3/4 map onto quarters much more cleanly than halves.
Description of algorithm
btilly explained the algorithm, but I'll do so in my own way too...
The algorithm basically generates a random real number x between 0 and 1, then returns a result depending on which "result bucket" that number falls in:
result bucket      result
x < 0.25           0
0.25 <= x < 0.5    1
0.5 <= x < 0.75    2
0.75 <= x          3
But, generating a random real number given only f() is difficult. We have to start with the knowledge that our x value should be in the range 0..1 - which we'll call our initial "possible x" space. We then hone in on an actual value for x:
each time we call f():
if f() returns 0 (probability 1 in 4), we consider x to be in the lower quarter of the "possible x" space, and eliminate the upper three quarters from that space
if f() returns 1 (probability 3 in 4), we consider x to be in the upper three-quarters of the "possible x" space, and eliminate the lower quarter from that space
when the "possible x" space is completely contained by a single result bucket, that means we've narrowed x down to the point where we know which result value it should map to and have no need to get a more specific value for x.
It may or may not help to consider this diagram :-):
"result bucket" cut-offs 0,.25,.5,.75,1
0=========0.25=========0.5==========0.75=========1 "possible x" 0..1
| | . . | f() chooses x < vs >= 0.25
| result 0 |------0.4375-------------+----------| "possible x" .25..1
| | result 1| . . | f() chooses x < vs >= 0.4375
| | | . ~0.58 . | "possible x" .4375..1
| | | . | . | f() chooses < vs >= ~.58
| | ||. | | . | 4 distinct "possible x" ranges
Code
int g() // return 0, 1, 2, or 3
{
if (f() == 0) return 0;
if (f() == 0) return 1;
double low = 0.25 + 0.25 * (1.0 - 0.25);
double high = 1.0;
while (true)
{
double cutoff = low + 0.25 * (high - low);
if (f() == 0)
high = cutoff;
else
low = cutoff;
if (high < 0.50) return 1;
if (low >= 0.75) return 3;
if (low >= 0.50 && high < 0.75) return 2;
}
}
If helpful, an intermediary to feed out 50/50 results one at a time:
int h()
{
static int i;
if (!i)
{
int x = g();
i = x | 4;
return x & 1;
}
else
{
int x = i & 2;
i = 0;
return x ? 1 : 0;
}
}
NOTE: This can be further tweaked by having the algorithm switch from considering an f()==0 result to hone in on the lower quarter, to having it hone in on the upper quarter instead, based on which on average resolves to a result bucket more quickly. Superficially, this seemed useful on the third call to f() when an upper-quarter result would indicate an immediate result of 3, while a lower-quarter result still spans probability point 0.5 and hence results 1 and 2. When I tried it, the results were actually worse. A more complex tuning was needed to see actual benefits, and I ended up writing a brute-force comparison of lower vs upper cutoff for second through eleventh calls to g(). The best result I found was an average of ~1.75, resulting from the 1st, 2nd, 5th and 8th calls to g() seeking low (i.e. setting low = cutoff).
Here is a solution based on central limit theorem, originally due to a friend of mine:
/*
Given a function f(x) that 1/4 times returns 0, 3/4 times returns 1. Write a function g(x) using f(x) that 1/2 times returns 0, 1/2 times returns 1.
*/
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <cstdio>
using namespace std;
int f() {
if (rand() % 4 == 0) return 0;
return 1;
}
int main() {
srand(time(0));
int cc = 0;
for (int k = 0; k < 1000; k++) { //number of different runs
int c = 0;
int limit = 10000; //the bigger the limit, the more we will approach %50 percent
for (int i=0; i<limit; ++i) c+= f();
cc += c < limit*0.75 ? 0 : 1; // c will be 0, with probability %50
}
printf("%d\n",cc); //cc is gonna be around 500
return 0;
}
Since each return of f() represents a 3/4 chance of TRUE, with some algebra we can just properly balance the odds. What we want is another function x() which returns a balancing probability of TRUE, so that
function g() {
return f() && x();
}
returns true 50% of the time.
So let's find the probability of x (p(x)), given p(f) and our desired total probability (1/2):
p(f) * p(x) = 1/2
3/4 * p(x) = 1/2
p(x) = (1/2) / 3/4
p(x) = 2/3
So x() should return TRUE with a probability of 2/3, since 2/3 * 3/4 = 6/12 = 1/2;
Thus the following should work for g():
function g() {
return f() && (rand() < 2/3);
}
Assuming
P(f[x] == 0) = 1/4
P(f[x] == 1) = 3/4
and requiring a function g[x] with the following assumptions
P(g[x] == 0) = 1/2
P(g[x] == 1) = 1/2
I believe the following definition of g[x] is sufficient (Mathematica)
g[x_] := If[f[x] + f[x + 1] == 1, 1, 0]
or, alternatively in C
int g(int x)
{
return f(x) + f(x+1) == 1
? 1
: 0;
}
This is based on the idea that invocations of {f[x], f[x+1]} would produce the following outcomes
{
{0, 0},
{0, 1},
{1, 0},
{1, 1}
}
Summing each of the outcomes we have
{
0,
1,
1,
2
}
where a sum of 1 represents 1/2 of the possible sum outcomes, with any other sum making up the other 1/2.
Edit.
As bdk says - {0,0} is less likely than {1,1} because
1/4 * 1/4 < 3/4 * 3/4
However, I am confused myself because given the following definition for f[x] (Mathematica)
f[x_] := Mod[x, 4] > 0 /. {False -> 0, True -> 1}
or alternatively in C
int f(int x)
{
return (x % 4) > 0
? 1
: 0;
}
then the results obtained from executing f[x] and g[x] seem to have the expected distribution.
Table[f[x], {x, 0, 20}]
{0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0}
Table[g[x], {x, 0, 20}]
{1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1}
This is much like the Monty Hall paradox.
In general.
Public Class Form1
'the general case
'
'twiceThis = 2 is 1 in four chance of 0
'twiceThis = 3 is 1 in six chance of 0
'
'twiceThis = x is 1 in 2x chance of 0
Const twiceThis As Integer = 7
Const numOf As Integer = twiceThis * 2
Private Sub Button1_Click(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles Button1.Click
Const tries As Integer = 1000
y = New List(Of Integer)
Dim ct0 As Integer = 0
Dim ct1 As Integer = 0
Debug.WriteLine("")
''show all possible values of fx
'For x As Integer = 1 To numOf
' Debug.WriteLine(fx)
'Next
'test that gx returns 50% 0's and 50% 1's
Dim stpw As New Stopwatch
stpw.Start()
For x As Integer = 1 To tries
Dim g_x As Integer = gx()
'Debug.WriteLine(g_x.ToString) 'used to verify that gx returns 0 or 1 randomly
If g_x = 0 Then ct0 += 1 Else ct1 += 1
Next
stpw.Stop()
'the results
Debug.WriteLine((ct0 / tries).ToString("p1"))
Debug.WriteLine((ct1 / tries).ToString("p1"))
Debug.WriteLine((stpw.ElapsedTicks / tries).ToString("n0"))
End Sub
Dim prng As New Random
Dim y As New List(Of Integer)
Private Function fx() As Integer
'1 in numOf chance of zero being returned
If y.Count = 0 Then
'reload y
y.Add(0) 'fx has only one zero value
Do
y.Add(1) 'the rest are ones
Loop While y.Count < numOf
End If
'return a random value
Dim idx As Integer = prng.Next(y.Count)
Dim rv As Integer = y(idx)
y.RemoveAt(idx) 'remove the value selected
Return rv
End Function
Private Function gx() As Integer
'a function g(x) using f(x) that 50% of the time returns 0
' that 50% of the time returns 1
Dim rv As Integer = 0
For x As Integer = 1 To twiceThis
fx()
Next
For x As Integer = 1 To twiceThis
rv += fx()
Next
If rv = twiceThis Then Return 1 Else Return 0
End Function
End Class

Algorithm for iterating over an outward spiral on a discrete 2D grid from the origin

For example, here is the shape of the intended spiral (and each step of the iteration):
        y
        |
        |
 16 15 14 13 12
 17  4  3  2 11
-18  5  0  1 10--- x
 19  6  7  8  9
 20 21 22 23 24
        |
        |
Where the lines are the x and y axes.
Here would be the actual values the algorithm would "return" with each iteration (the coordinates of the points):
[0,0],
[1,0], [1,1], [0,1], [-1,1], [-1,0], [-1,-1], [0,-1], [1,-1],
[2,-1], [2,0], [2,1], [2,2], [1,2], [0,2], [-1,2], [-2,2], [-2,1], [-2,0]..
etc.
I've tried searching, but I'm not exactly sure what to search for, and the searches I've tried have come up with dead ends.
I'm not even sure where to start, other than something messy and inelegant and ad-hoc, like creating/coding a new spiral for each layer.
Can anyone help me get started?
Also, is there a way that can easily switch between clockwise and counter-clockwise (the orientation), and which direction to "start" the spiral from? (the rotation)
Also, is there a way to do this recursively?
My application
I have a sparse grid filled with data points, and I want to add a new data point to the grid, and have it be "as close as possible" to a given other point.
To do that, I'll call grid.find_closest_available_point_to(point), which will iterate over the spiral given above and return the first position that is empty and available.
So first, it'll check point+[0,0] (just for completeness's sake). Then it'll check point+[1,0]. Then it'll check point+[1,1]. Then point+[0,1], etc. And return the first one for which the position in the grid is empty (or not occupied already by a data point).
There is no upper bound to grid size.
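For concreteness, here is a rough sketch (my own, with hypothetical names, not part of the question) of what find_closest_available_point_to could look like: walk outward ring by ring from the origin and return the first free cell. The rings visit the same cells as the spiral, just not in the exact spiral order.
function findClosestAvailable(origin, occupied) {
    const [ox, oy] = origin;
    const free = (x, y) => !occupied.has(`${x},${y}`);
    if (free(ox, oy)) return [ox, oy];
    for (let r = 1; ; r++) {
        for (let dx = -r; dx <= r; dx++) {
            for (let dy = -r; dy <= r; dy++) {
                if (Math.max(Math.abs(dx), Math.abs(dy)) !== r) continue; // ring r only
                if (free(ox + dx, oy + dy)) return [ox + dx, oy + dy];
            }
        }
    }
}
// Example: with (0,0) and (1,0) occupied, some cell on ring 1 is returned.
console.log(findClosestAvailable([0, 0], new Set(["0,0", "1,0"]))); // e.g. [-1,-1]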
There's nothing wrong with a direct, "ad-hoc" solution. It can be clean enough too.
Just notice that the spiral is built from segments, and you can get the next segment from the current one by rotating it by 90 degrees. Every two rotations, the length of the segment grows by 1.
Edit: illustration, with the segments numbered:
          ... 11 10
  7  7  7  7  6 10
  8  3  3  2  6 10
  8  4  .  1  6 10
  8  4  5  5  5 10
  8  9  9  9  9  9
// (di, dj) is a vector - direction in which we move right now
int di = 1;
int dj = 0;
// length of current segment
int segment_length = 1;
// current position (i, j) and how much of current segment we passed
int i = 0;
int j = 0;
int segment_passed = 0;
for (int k = 0; k < NUMBER_OF_POINTS; ++k) {
    // make a step, add 'direction' vector (di, dj) to current position (i, j)
    i += di;
    j += dj;
    ++segment_passed;
    System.out.println(i + " " + j);
    if (segment_passed == segment_length) {
        // done with current segment
        segment_passed = 0;
        // 'rotate' directions
        int buffer = di;
        di = -dj;
        dj = buffer;
        // increase segment length if necessary
        if (dj == 0) {
            ++segment_length;
        }
    }
}
To change original direction, look at original values of di and dj. To switch rotation to clockwise, see how those values are modified.
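If it helps, here is the same segment-walking loop sketched in JavaScript (my own port, not from the answer), with the rotation flipped to (dj, -di) so the spiral runs clockwise; use (-dj, di), as in the Java code above, for the original counter-clockwise order:
let di = 1, dj = 0;                 // start moving along +x, as above
let segmentLength = 1, segmentPassed = 0;
let i = 0, j = 0;
const points = [[0, 0]];
for (let k = 0; k < 8; k++) {
    i += di;
    j += dj;
    points.push([i, j]);
    if (++segmentPassed === segmentLength) {
        segmentPassed = 0;
        [di, dj] = [dj, -di];       // clockwise turn; [-dj, di] would turn counter-clockwise
        if (dj === 0) segmentLength++;
    }
}
console.log(points); // [[0,0],[1,0],[1,-1],[0,-1],[-1,-1],[-1,0],[-1,1],[0,1],[1,1]]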
Here's a stab at it in C++, a stateful iterator.
class SpiralOut{
protected:
unsigned layer;
unsigned leg;
public:
int x, y; //read these as output from next, do not modify.
SpiralOut():layer(1),leg(0),x(0),y(0){}
void goNext(){
switch(leg){
case 0: ++x; if(x == layer) ++leg; break;
case 1: ++y; if(y == layer) ++leg; break;
case 2: --x; if(-x == layer) ++leg; break;
case 3: --y; if(-y == layer){ leg = 0; ++layer; } break;
}
}
};
Should be about as efficient as it gets.
This is the javascript solution based on the answer at
Looping in a spiral
var x = 0,
y = 0,
delta = [0, -1],
// spiral width
width = 6,
// spiral height
height = 6;
for (i = Math.pow(Math.max(width, height), 2); i>0; i--) {
if ((-width/2 < x && x <= width/2)
&& (-height/2 < y && y <= height/2)) {
console.debug('POINT', x, y);
}
if (x === y
|| (x < 0 && x === -y)
|| (x > 0 && x === 1-y)){
// change direction
delta = [-delta[1], delta[0]]
}
x += delta[0];
y += delta[1];
}
fiddle: http://jsfiddle.net/N9gEC/18/
This problem is best understood by analyzing how the coordinates of the spiral corners change. Consider this table of the first 8 spiral corners (excluding the origin):
  x,y  | dx,dy | k-th corner | N | Sign |
___________________________________________
  1,0  |  1,0  |      1      | 1 |  +   |
  1,1  |  0,1  |      2      | 1 |  +   |
 -1,1  | -2,0  |      3      | 2 |  -   |
 -1,-1 |  0,-2 |      4      | 2 |  -   |
  2,-1 |  3,0  |      5      | 3 |  +   |
  2,2  |  0,3  |      6      | 3 |  +   |
 -2,2  | -4,0  |      7      | 4 |  -   |
 -2,-2 |  0,-4 |      8      | 4 |  -   |
By looking at this table we can calculate X,Y of k-th corner given X,Y of (k-1) corner:
N = INT((1+k)/2)
Sign = | +1 when N is Odd
       | -1 when N is Even
[dx,dy] = | [N*Sign, 0] when k is Odd
          | [0, N*Sign] when k is Even
[X(k),Y(k)] = [X(k-1)+dx, Y(k-1)+dy]
Now, when you know the coordinates of the k-th and (k+1)-th spiral corners, you can get all the data points in between simply by adding 1 or -1 to the x or y of the last point.
That's it.
Good luck.
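As a rough illustration (my own sketch, not part of the answer), the corner rules above can be turned into a generator that walks from corner to corner one unit at a time:
function* spiral() {
    let x = 0, y = 0;
    yield [x, y];
    for (let k = 1; ; k++) {
        const n = Math.floor((1 + k) / 2);
        const sign = n % 2 === 1 ? 1 : -1;
        const [dx, dy] = k % 2 === 1 ? [n * sign, 0] : [0, n * sign];
        const steps = Math.abs(dx + dy);          // one of dx, dy is always 0
        const sx = Math.sign(dx), sy = Math.sign(dy);
        for (let s = 0; s < steps; s++) {         // step unit by unit to the next corner
            x += sx;
            y += sy;
            yield [x, y];
        }
    }
}
const it = spiral();
console.log(Array.from({ length: 10 }, () => it.next().value));
// [[0,0],[1,0],[1,1],[0,1],[-1,1],[-1,0],[-1,-1],[0,-1],[1,-1],[2,-1]]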
I would solve it using some math. Here is Ruby code (with input and output):
(0..($*.pop.to_i)).each do |i|
j = Math.sqrt(i).round
k = (j ** 2 - i).abs - j
p = [k, -k].map {|l| (l + j ** 2 - i - (j % 2)) * 0.5 * (-1) ** j}.map(&:to_i)
puts "p => #{p[0]}, #{p[1]}"
end
E.g.
$ ruby spiral.rb 10
p => 0, 0
p => 1, 0
p => 1, 1
p => 0, 1
p => -1, 1
p => -1, 0
p => -1, -1
p => 0, -1
p => 1, -1
p => 2, -1
p => 2, 0
And golfed version:
p (0..$*.pop.to_i).map{|i|j=Math.sqrt(i).round;k=(j**2-i).abs-j;[k,-k].map{|l|(l+j**2-i-j%2)*0.5*(-1)**j}.map(&:to_i)}
Edit
First try to approach the problem functionally. What do you need to know, at each step, to get to the next step?
Focus on plane's first diagonal x = y. k tells you how many steps you must take before touching it: negative values mean you have to move abs(k) steps vertically, while positive mean you have to move k steps horizontally.
Now focus on the length of the segment you're currently in (spiral's vertices - when the inclination of segments change - are considered as part of the "next" segment). It's 0 the first time, then 1 for the next two segments (= 2 points), then 2 for the next two segments (= 4 points), etc. It changes every two segments and each time the number of points part of that segments increase. That's what j is used for.
Accidentally, this can be used for getting another bit of information: (-1)**j is just a shorthand to "1 if you're decreasing some coordinate to get to this step; -1 if you're increasing" (Note that only one coordinate is changed at each step). Same holds for j%2, just replace 1 with 0 and -1 with 1 in this case. This mean they swap between two values: one for segments "heading" up or right and one for those going down or left.
This is a familiar reasoning, if you're used to functional programming: the rest is just a little bit of simple math.
It can be done in a fairly straightforward way using recursion. We just need some basic 2D vector math and tools for generating and mapping over (possibly infinite) sequences:
// 2D vectors
const add = ([x0, y0]) => ([x1, y1]) => [x0 + x1, y0 + y1];
const rotate = θ => ([x, y]) => [
Math.round(x * Math.cos(θ) - y * Math.sin(θ)),
Math.round(x * Math.sin(θ) + y * Math.cos(θ))
];
// Iterables
const fromGen = g => ({ [Symbol.iterator]: g });
const range = n => [...Array(n).keys()];
const map = f => it =>
fromGen(function*() {
for (const v of it) {
yield f(v);
}
});
And now we can express a spiral recursively by generating a flat line, plus a rotated (flat line, plus a rotated (flat line, plus a rotated ...)):
const spiralOut = i => {
const n = Math.floor(i / 2) + 1;
const leg = range(n).map(x => [x, 0]);
const transform = p => add([n, 0])(rotate(Math.PI / 2)(p));
return fromGen(function*() {
yield* leg;
yield* map(transform)(spiralOut(i + 1));
});
};
Which produces an infinite list of the coordinates you're interested in. Here's a sample of the contents:
const take = n => it =>
fromGen(function*() {
for (let v of it) {
if (--n < 0) break;
yield v;
}
});
const points = [...take(5)(spiralOut(0))];
console.log(points);
// => [[0,0],[1,0],[1,1],[0,1],[-1,1]]
You can also negate the rotation angle to go in the other direction, or play around with the transform and leg length to get more complex shapes.
For example, the same technique works for inward spirals as well. It's just a slightly different transform, and a slightly different scheme for changing the length of the leg:
const empty = [];
const append = it1 => it2 =>
fromGen(function*() {
yield* it1;
yield* it2;
});
const spiralIn = ([w, h]) => {
const leg = range(w).map(x => [x, 0]);
const transform = p => add([w - 1, 1])(rotate(Math.PI / 2)(p));
return w * h === 0
? empty
: append(leg)(
fromGen(function*() {
yield* map(transform)(spiralIn([h - 1, w]));
})
);
};
Which produces (this spiral is finite, so we don't need to take some arbitrary number):
const points = [...spiralIn([3, 3])];
console.log(points);
// => [[0,0],[1,0],[2,0],[2,1],[2,2],[1,2],[0,2],[0,1],[1,1]]
Here's the whole thing together as a live snippet if you want play around with it:
// 2D vectors
const add = ([x0, y0]) => ([x1, y1]) => [x0 + x1, y0 + y1];
const rotate = θ => ([x, y]) => [
Math.round(x * Math.cos(θ) - y * Math.sin(θ)),
Math.round(x * Math.sin(θ) + y * Math.cos(θ))
];
// Iterables
const fromGen = g => ({ [Symbol.iterator]: g });
const range = n => [...Array(n).keys()];
const map = f => it =>
fromGen(function*() {
for (const v of it) {
yield f(v);
}
});
const take = n => it =>
fromGen(function*() {
for (let v of it) {
if (--n < 0) break;
yield v;
}
});
const empty = [];
const append = it1 => it2 =>
fromGen(function*() {
yield* it1;
yield* it2;
});
// Outward spiral
const spiralOut = i => {
const n = Math.floor(i / 2) + 1;
const leg = range(n).map(x => [x, 0]);
const transform = p => add([n, 0])(rotate(Math.PI / 2)(p));
return fromGen(function*() {
yield* leg;
yield* map(transform)(spiralOut(i + 1));
});
};
// Test
{
const points = [...take(5)(spiralOut(0))];
console.log(JSON.stringify(points));
}
// Inward spiral
const spiralIn = ([w, h]) => {
const leg = range(w).map(x => [x, 0]);
const transform = p => add([w - 1, 1])(rotate(Math.PI / 2)(p));
return w * h === 0
? empty
: append(leg)(
fromGen(function*() {
yield* map(transform)(spiralIn([h - 1, w]));
})
);
};
// Test
{
const points = [...spiralIn([3, 3])];
console.log(JSON.stringify(points));
}
Here is a Python implementation based on the answer by mako.
def spiral_iterator(iteration_limit=999):
    x = 0
    y = 0
    layer = 1
    leg = 0
    iteration = 0
    yield 0, 0
    while iteration < iteration_limit:
        iteration += 1
        if leg == 0:
            x += 1
            if (x == layer):
                leg += 1
        elif leg == 1:
            y += 1
            if (y == layer):
                leg += 1
        elif leg == 2:
            x -= 1
            if -x == layer:
                leg += 1
        elif leg == 3:
            y -= 1
            if -y == layer:
                leg = 0
                layer += 1
        yield x, y
Running this code:
for x, y in spiral_iterator(10):
    print(x, y)
Yields:
0 0
1 0
1 1
0 1
-1 1
-1 0
-1 -1
0 -1
1 -1
2 -1
2 0
Try searching for either parametric or polar equations. Both are suitable for plotting spirally things. Here's a page that has plenty of examples, with pictures (and equations). It should give you some more ideas of what to look for.
I've done pretty much the same thing as a training exercise, with some differences in the output and the spiral orientation, and with an extra requirement: the function's spatial complexity has to be O(1).
After thinking for a while I came to the idea that, by knowing where the spiral starts and which position I am calculating the value for, I can simplify the problem by subtracting all the complete "circles" of the spiral and then just calculating a simpler value.
Here is my implementation of that algorithm in ruby:
def print_spiral(n)
(0...n).each do |y|
(0...n).each do |x|
printf("%02d ", get_value(x, y, n))
end
print "\n"
end
end
def distance_to_border(x, y, n)
[x, y, n - 1 - x, n - 1 - y].min
end
def get_value(x, y, n)
dist = distance_to_border(x, y, n)
initial = n * n - 1
(0...dist).each do |i|
initial -= 2 * (n - 2 * i) + 2 * (n - 2 * i - 2)
end
x -= dist
y -= dist
n -= dist * 2
if y == 0 then
initial - x # If we are in the upper row
elsif y == n - 1 then
initial - n - (n - 2) - ((n - 1) - x) # If we are in the lower row
elsif x == n - 1 then
initial - n - y + 1# If we are in the right column
else
initial - 2 * n - (n - 2) - ((n - 1) - y - 1) # If we are in the left column
end
end
print_spiral 5
This is not exactly the thing you asked for, but I believe it'll help you think through your problem.
I had a similar problem, but I didn't want to loop over the entire spiral each time to find the next new coordinate. The requirement is that you know your last coordinate.
Here is what I came up with, after a lot of reading up on the other solutions:
function getNextCoord(coord) {
// required info
var x = coord.x,
y = coord.y,
level = Math.max(Math.abs(x), Math.abs(y));
delta = {x:0, y:0};
// calculate current direction (start up)
if (-x === level)
delta.y = 1; // going up
else if (y === level)
delta.x = 1; // going right
else if (x === level)
delta.y = -1; // going down
else if (-y === level)
delta.x = -1; // going left
// check if we need to turn down or left
if (x > 0 && (x === y || x === -y)) {
// change direction (clockwise)
delta = {x: delta.y,
y: -delta.x};
}
// move to next coordinate
x += delta.x;
y += delta.y;
return {x: x,
y: y};
}
coord = {x: 0, y: 0}
for (i = 0; i < 40; i++) {
console.log('['+ coord.x +', ' + coord.y + ']');
coord = getNextCoord(coord);
}
Still not sure if it is the most elegant solution. Perhaps some elegant maths could remove some of the if statements. Some limitations: it would need some modification to change the spiral direction, it doesn't take non-square spirals into account, and it can't spiral around a fixed coordinate.
I have an algorithm in Java that produces output similar to yours, except that it prioritizes the number on the right, then the number on the left.
public static String[] rationals(int amount){
String[] numberList=new String[amount];
int currentNumberLeft=0;
int newNumberLeft=0;
int currentNumberRight=0;
int newNumberRight=0;
int state=1;
numberList[0]="("+newNumberLeft+","+newNumberRight+")";
boolean direction=false;
for(int count=1;count<amount;count++){
if(direction==true&&newNumberLeft==state){direction=false;state=(state<=0?(-state)+1:-state);}
else if(direction==false&&newNumberRight==state){direction=true;}
if(direction){newNumberLeft=currentNumberLeft+sign(state);}else{newNumberRight=currentNumberRight+sign(state);}
currentNumberLeft=newNumberLeft;
currentNumberRight=newNumberRight;
numberList[count]="("+newNumberLeft+","+newNumberRight+")";
}
return numberList;
}
Here's the algorithm. It rotates clockwise, but could easily rotate anticlockwise, with a few alterations. I made it in just under an hour.
// spiral_get_value(x,y);
sx = argument0;
sy = argument1;
b = max(sqrt(sqr(sx)),sqrt(sqr(sy)));
c = -b;
d = (b*2)+1;
us = (sy==c and sx !=c);
rs = (sx==b and sy !=c);
bs = (sy==b and sx !=b);
ls = (sx==c and sy !=b);
ra = rs*((b)*2);
ba = bs*((b)*4);
la = ls*((b)*6);
ax = (us*sx)+(bs*-sx);
ay = (rs*sy)+(ls*-sy);
add = ra+ba+la+ax+ay;
value = add+sqr(d-2)+b;
return(value);`
It will handle any x / y values (infinite).
It's written in GML (Game Maker Language), but the actual logic is sound in any programming language.
The single line algorithm only has 2 variables (sx and sy) for the x and y inputs. I basically expanded brackets, a lot. It makes it easier for you to paste it into notepad and change 'sx' for your x argument / variable name and 'sy' to your y argument / variable name.
`// spiral_get_value(x,y);
sx = argument0;
sy = argument1;
value = ((((sx==max(sqrt(sqr(sx)),sqrt(sqr(sy))) and sy !=(-1*max(sqrt(sqr(sx)),sqrt(sqr(sy))))))*((max(sqrt(sqr(sx)),sqrt(sqr(sy))))*2))+(((sy==max(sqrt(sqr(sx)),sqrt(sqr(sy))) and sx !=max(sqrt(sqr(sx)),sqrt(sqr(sy)))))*((max(sqrt(sqr(sx)),sqrt(sqr(sy))))*4))+(((sx==(-1*max(sqrt(sqr(sx)),sqrt(sqr(sy)))) and sy !=max(sqrt(sqr(sx)),sqrt(sqr(sy)))))*((max(sqrt(sqr(sx)),sqrt(sqr(sy))))*6))+((((sy==(-1*max(sqrt(sqr(sx)),sqrt(sqr(sy)))) and sx !=(-1*max(sqrt(sqr(sx)),sqrt(sqr(sy))))))*sx)+(((sy==max(sqrt(sqr(sx)),sqrt(sqr(sy))) and sx !=max(sqrt(sqr(sx)),sqrt(sqr(sy)))))*-sx))+(((sx==max(sqrt(sqr(sx)),sqrt(sqr(sy))) and sy !=(-1*max(sqrt(sqr(sx)),sqrt(sqr(sy))))))*sy)+(((sx==(-1*max(sqrt(sqr(sx)),sqrt(sqr(sy)))) and sy !=max(sqrt(sqr(sx)),sqrt(sqr(sy)))))*-sy))+sqr(((max(sqrt(sqr(sx)),sqrt(sqr(sy)))*2)+1)-2)+max(sqrt(sqr(sx)),sqrt(sqr(sy)));
return(value);`
I know the reply is awfully late :D but I hope it helps future visitors.
